Oct 14 02:38:33 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Oct 14 02:38:33 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 14 02:38:33 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 14 02:38:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 14 02:38:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 14 02:38:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 14 02:38:33 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 14 02:38:33 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 14 02:38:33 localhost kernel: signal: max sigframe size: 1776
Oct 14 02:38:33 localhost kernel: BIOS-provided physical RAM map:
Oct 14 02:38:33 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 14 02:38:33 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 14 02:38:33 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 14 02:38:33 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 14 02:38:33 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 14 02:38:33 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 14 02:38:33 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 14 02:38:33 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Oct 14 02:38:33 localhost kernel: NX (Execute Disable) protection: active
Oct 14 02:38:33 localhost kernel: SMBIOS 2.8 present.
Oct 14 02:38:33 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 14 02:38:33 localhost kernel: Hypervisor detected: KVM
Oct 14 02:38:33 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 14 02:38:33 localhost kernel: kvm-clock: using sched offset of 2377835588 cycles
Oct 14 02:38:33 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 14 02:38:33 localhost kernel: tsc: Detected 2799.998 MHz processor
Oct 14 02:38:33 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Oct 14 02:38:33 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 14 02:38:33 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 14 02:38:33 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 14 02:38:33 localhost kernel: Using GB pages for direct mapping
Oct 14 02:38:33 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Oct 14 02:38:33 localhost kernel: ACPI: Early table checksum verification disabled
Oct 14 02:38:33 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 14 02:38:33 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 14 02:38:33 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 14 02:38:33 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 14 02:38:33 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 14 02:38:33 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 14 02:38:33 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 14 02:38:33 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 14 02:38:33 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 14 02:38:33 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 14 02:38:33 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 14 02:38:33 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 14 02:38:33 localhost kernel: No NUMA configuration found
Oct 14 02:38:33 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Oct 14 02:38:33 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Oct 14 02:38:33 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Oct 14 02:38:33 localhost kernel: Zone ranges:
Oct 14 02:38:33 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 14 02:38:33 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Oct 14 02:38:33 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Oct 14 02:38:33 localhost kernel: Device empty
Oct 14 02:38:33 localhost kernel: Movable zone start for each node
Oct 14 02:38:33 localhost kernel: Early memory node ranges
Oct 14 02:38:33 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 14 02:38:33 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 14 02:38:33 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Oct 14 02:38:33 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Oct 14 02:38:33 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 14 02:38:33 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 14 02:38:33 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 14 02:38:33 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 14 02:38:33 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 14 02:38:33 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 14 02:38:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 14 02:38:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 14 02:38:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 14 02:38:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 14 02:38:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 14 02:38:33 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 14 02:38:33 localhost kernel: TSC deadline timer available
Oct 14 02:38:33 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Oct 14 02:38:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 14 02:38:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 14 02:38:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 14 02:38:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 14 02:38:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 14 02:38:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 14 02:38:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 14 02:38:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 14 02:38:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 14 02:38:33 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 14 02:38:33 localhost kernel: Booting paravirtualized kernel on KVM
Oct 14 02:38:33 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 14 02:38:33 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 14 02:38:33 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Oct 14 02:38:33 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 14 02:38:33 localhost kernel: Fallback order for Node 0: 0
Oct 14 02:38:33 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Oct 14 02:38:33 localhost kernel: Policy zone: Normal
Oct 14 02:38:33 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 14 02:38:33 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Oct 14 02:38:33 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Oct 14 02:38:33 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 14 02:38:33 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 14 02:38:33 localhost kernel: software IO TLB: area num 8.
Oct 14 02:38:33 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Oct 14 02:38:33 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Oct 14 02:38:33 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 14 02:38:33 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Oct 14 02:38:33 localhost kernel: ftrace: allocated 176 pages with 3 groups
Oct 14 02:38:33 localhost kernel: Dynamic Preempt: voluntary
Oct 14 02:38:33 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 14 02:38:33 localhost kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 14 02:38:33 localhost kernel: 	Trampoline variant of Tasks RCU enabled.
Oct 14 02:38:33 localhost kernel: 	Rude variant of Tasks RCU enabled.
Oct 14 02:38:33 localhost kernel: 	Tracing variant of Tasks RCU enabled.
Oct 14 02:38:33 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 14 02:38:33 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 14 02:38:33 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 14 02:38:33 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 14 02:38:33 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 14 02:38:33 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Oct 14 02:38:33 localhost kernel: Console: colour VGA+ 80x25
Oct 14 02:38:33 localhost kernel: printk: console [tty0] enabled
Oct 14 02:38:33 localhost kernel: printk: console [ttyS0] enabled
Oct 14 02:38:33 localhost kernel: ACPI: Core revision 20211217
Oct 14 02:38:33 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 14 02:38:33 localhost kernel: x2apic enabled
Oct 14 02:38:33 localhost kernel: Switched APIC routing to physical x2apic.
Oct 14 02:38:33 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 14 02:38:33 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 14 02:38:33 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 14 02:38:33 localhost kernel: LSM: Security Framework initializing
Oct 14 02:38:33 localhost kernel: Yama: becoming mindful.
Oct 14 02:38:33 localhost kernel: SELinux: Initializing.
Oct 14 02:38:33 localhost kernel: LSM support for eBPF active
Oct 14 02:38:33 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 14 02:38:33 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 14 02:38:33 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 14 02:38:33 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 14 02:38:33 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 14 02:38:33 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 14 02:38:33 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 14 02:38:33 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 14 02:38:33 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 14 02:38:33 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 14 02:38:33 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 14 02:38:33 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 14 02:38:33 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 14 02:38:33 localhost kernel: Freeing SMP alternatives memory: 36K
Oct 14 02:38:33 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 14 02:38:33 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Oct 14 02:38:33 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 14 02:38:33 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 14 02:38:33 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 14 02:38:33 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 14 02:38:33 localhost kernel: ... version: 0
Oct 14 02:38:33 localhost kernel: ... bit width: 48
Oct 14 02:38:33 localhost kernel: ... generic registers: 6
Oct 14 02:38:33 localhost kernel: ... value mask: 0000ffffffffffff
Oct 14 02:38:33 localhost kernel: ... max period: 00007fffffffffff
Oct 14 02:38:33 localhost kernel: ... fixed-purpose events: 0
Oct 14 02:38:33 localhost kernel: ... event mask: 000000000000003f
Oct 14 02:38:33 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 14 02:38:33 localhost kernel: rcu: 	Max phase no-delay instances is 400.
Oct 14 02:38:33 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 14 02:38:33 localhost kernel: x86: Booting SMP configuration:
Oct 14 02:38:33 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Oct 14 02:38:33 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 14 02:38:33 localhost kernel: smpboot: Max logical packages: 8
Oct 14 02:38:33 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 14 02:38:33 localhost kernel: node 0 deferred pages initialised in 24ms
Oct 14 02:38:33 localhost kernel: devtmpfs: initialized
Oct 14 02:38:33 localhost kernel: x86/mm: Memory block size: 128MB
Oct 14 02:38:33 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 14 02:38:33 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 14 02:38:33 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 14 02:38:33 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 14 02:38:33 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Oct 14 02:38:33 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 14 02:38:33 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 14 02:38:33 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 14 02:38:33 localhost kernel: audit: type=2000 audit(1760423912.508:1): state=initialized audit_enabled=0 res=1
Oct 14 02:38:33 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 14 02:38:33 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 14 02:38:33 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 14 02:38:33 localhost kernel: cpuidle: using governor menu
Oct 14 02:38:33 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Oct 14 02:38:33 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 14 02:38:33 localhost kernel: PCI: Using configuration type 1 for base access
Oct 14 02:38:33 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 14 02:38:33 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 14 02:38:33 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Oct 14 02:38:33 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 14 02:38:33 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 14 02:38:33 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 14 02:38:33 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 14 02:38:33 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 14 02:38:33 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 14 02:38:33 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 14 02:38:33 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 14 02:38:33 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 14 02:38:33 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 14 02:38:33 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 14 02:38:33 localhost kernel: ACPI: Interpreter enabled
Oct 14 02:38:33 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 14 02:38:33 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 14 02:38:33 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 14 02:38:33 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 14 02:38:33 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 14 02:38:33 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 14 02:38:33 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [3] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [4] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [5] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [6] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [7] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [8] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [9] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [10] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [11] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [12] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [13] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [14] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [15] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [16] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [17] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [18] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [19] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [20] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [21] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [22] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [23] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [24] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [25] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [26] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [27] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [28] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [29] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [30] registered
Oct 14 02:38:33 localhost kernel: acpiphp: Slot [31] registered
Oct 14 02:38:33 localhost kernel: PCI host bridge to bus 0000:00
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 14 02:38:33 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 14 02:38:33 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 14 02:38:33 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 14 02:38:33 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 14 02:38:33 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Oct 14 02:38:33 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Oct 14 02:38:33 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 14 02:38:33 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 14 02:38:33 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Oct 14 02:38:33 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Oct 14 02:38:33 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 14 02:38:33 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Oct 14 02:38:33 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Oct 14 02:38:33 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Oct 14 02:38:33 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Oct 14 02:38:33 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 14 02:38:33 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Oct 14 02:38:33 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Oct 14 02:38:33 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 14 02:38:33 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Oct 14 02:38:33 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Oct 14 02:38:33 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 14 02:38:33 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 14 02:38:33 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 14 02:38:33 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 14 02:38:33 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 14 02:38:33 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 14 02:38:33 localhost kernel: iommu: Default domain type: Translated
Oct 14 02:38:33 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 14 02:38:33 localhost kernel: SCSI subsystem initialized
Oct 14 02:38:33 localhost kernel: ACPI: bus type USB registered
Oct 14 02:38:33 localhost kernel: usbcore: registered new interface driver usbfs
Oct 14 02:38:33 localhost kernel: usbcore: registered new interface driver hub
Oct 14 02:38:33 localhost kernel: usbcore: registered new device driver usb
Oct 14 02:38:33 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 14 02:38:33 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 14 02:38:33 localhost kernel: PTP clock support registered
Oct 14 02:38:33 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 14 02:38:33 localhost kernel: NetLabel: Initializing
Oct 14 02:38:33 localhost kernel: NetLabel: domain hash size = 128
Oct 14 02:38:33 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Oct 14 02:38:33 localhost kernel: NetLabel: unlabeled traffic allowed by default
Oct 14 02:38:33 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 14 02:38:33 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 14 02:38:33 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 14 02:38:33 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 14 02:38:33 localhost kernel: vgaarb: loaded
Oct 14 02:38:33 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 14 02:38:33 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 14 02:38:33 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 14 02:38:33 localhost kernel: pnp: PnP ACPI init
Oct 14 02:38:33 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 14 02:38:33 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 14 02:38:33 localhost kernel: NET: Registered PF_INET protocol family
Oct 14 02:38:33 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 14 02:38:33 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Oct 14 02:38:33 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 14 02:38:33 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 14 02:38:33 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 14 02:38:33 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Oct 14 02:38:33 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Oct 14 02:38:33 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Oct 14 02:38:33 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Oct 14 02:38:33 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 14 02:38:33 localhost kernel: NET: Registered PF_XDP protocol family
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 14 02:38:33 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 14 02:38:33 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 14 02:38:33 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 14 02:38:33 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 30741 usecs
Oct 14 02:38:33 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 14 02:38:33 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 14 02:38:33 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 14 02:38:33 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 14 02:38:33 localhost kernel: ACPI: bus type thunderbolt registered
Oct 14 02:38:33 localhost kernel: Initialise system trusted keyrings
Oct 14 02:38:33 localhost kernel: Key type blacklist registered
Oct 14 02:38:33 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Oct 14 02:38:33 localhost kernel: zbud: loaded
Oct 14 02:38:33 localhost kernel: integrity: Platform Keyring initialized
Oct 14 02:38:33 localhost kernel: NET: Registered PF_ALG protocol family
Oct 14 02:38:33 localhost kernel: xor: automatically using best checksumming function avx
Oct 14 02:38:33 localhost kernel: Key type asymmetric registered
Oct 14 02:38:33 localhost kernel: Asymmetric key parser 'x509' registered
Oct 14 02:38:33 localhost kernel: Running certificate verification selftests
Oct 14 02:38:33 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 14 02:38:33 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 14 02:38:33 localhost kernel: io scheduler mq-deadline registered
Oct 14 02:38:33 localhost kernel: io scheduler kyber registered
Oct 14 02:38:33 localhost kernel: io scheduler bfq registered
Oct 14 02:38:33 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 14 02:38:33 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 14 02:38:33 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 14 02:38:33 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 14 02:38:33 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 14 02:38:33 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 14 02:38:33 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 14 02:38:33 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 14 02:38:33 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 14 02:38:33 localhost kernel: Non-volatile memory driver v1.3
Oct 14 02:38:33 localhost kernel: rdac: device handler registered
Oct 14 02:38:33 localhost kernel: hp_sw: device handler registered
Oct 14 02:38:33 localhost kernel: emc: device handler registered
Oct 14 02:38:33 localhost kernel: alua: device handler registered
Oct 14 02:38:33 localhost kernel: libphy: Fixed MDIO Bus: probed
Oct 14 02:38:33 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Oct 14 02:38:33 localhost kernel: ehci-pci: EHCI PCI platform driver
Oct 14 02:38:33 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Oct 14 02:38:33 localhost kernel: ohci-pci: OHCI PCI platform driver
Oct 14 02:38:33 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Oct 14 02:38:33 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 14 02:38:33 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 14 02:38:33 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 14 02:38:33 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 14 02:38:33 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 14 02:38:33 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 14 02:38:33 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 14 02:38:33 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Oct 14 02:38:33 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 14 02:38:33 localhost kernel: hub 1-0:1.0: USB hub found
Oct 14 02:38:33 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 14 02:38:33 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 14 02:38:33 localhost kernel: usbserial: USB Serial support registered for generic
Oct 14 02:38:33 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 14 02:38:33 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 14 02:38:33 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 14 02:38:33 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 14 02:38:33 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 14 02:38:33 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 14 02:38:33 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 14 02:38:33 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-14T06:38:32 UTC (1760423912)
Oct 14 02:38:33 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 14 02:38:33 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 14 02:38:33 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 14 02:38:33 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 14 02:38:33 localhost kernel: usbcore: registered new interface driver usbhid
Oct 14 02:38:33 localhost kernel: usbhid: USB HID core driver
Oct 14 02:38:33 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 14 02:38:33 localhost kernel: Initializing XFRM netlink socket
Oct 14 02:38:33 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 14 02:38:33 localhost kernel: Segment Routing with IPv6
Oct 14 02:38:33 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 14 02:38:33 localhost kernel: mpls_gso: MPLS GSO support
Oct 14 02:38:33 localhost kernel: IPI shorthand broadcast: enabled
Oct 14 02:38:33 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 14 02:38:33 localhost kernel: AES CTR mode by8 optimization enabled
Oct 14 02:38:33 localhost kernel: sched_clock: Marking stable (796733369, 185958877)->(1116718849, -134026603)
Oct 14 02:38:33 localhost kernel: registered taskstats version 1
Oct 14 02:38:33 localhost kernel: Loading compiled-in X.509 certificates
Oct 14 02:38:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Oct 14 02:38:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 14 02:38:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 14 02:38:33 localhost kernel: zswap: loaded using pool lzo/zbud
Oct 14 02:38:33 localhost kernel: page_owner is disabled
Oct 14 02:38:33 localhost kernel: Key type big_key registered
Oct 14 02:38:33 localhost kernel: Freeing initrd memory: 74232K
Oct 14 02:38:33 localhost kernel: Key type encrypted registered
Oct 14 02:38:33 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 14 02:38:33 localhost kernel: Loading compiled-in module X.509 certificates
Oct 14 02:38:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Oct 14 02:38:33 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 14 02:38:33 localhost kernel: ima: No architecture policies found
Oct 14 02:38:33 localhost kernel: evm: Initialising EVM extended attributes:
Oct 14 02:38:33 localhost kernel: evm: security.selinux
Oct 14 02:38:33 localhost kernel: evm: security.SMACK64 (disabled)
Oct 14 02:38:33 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 14 02:38:33 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 14 02:38:33 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 14 02:38:33 localhost kernel: evm: security.apparmor (disabled)
Oct 14 02:38:33 localhost kernel: evm: security.ima
Oct 14 02:38:33 localhost kernel: evm: security.capability
Oct 14 02:38:33 localhost kernel: evm: HMAC attrs: 0x1
Oct 14 02:38:33 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 14 02:38:33 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 14 02:38:33 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 14 02:38:33 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 14 02:38:33 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 14 02:38:33 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 14 02:38:33 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 14 02:38:33 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 14 02:38:33 localhost kernel: Freeing unused decrypted memory: 2036K
Oct 14 02:38:33 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Oct 14 02:38:33 localhost kernel: Write protecting the kernel read-only data: 26624k
Oct 14 02:38:33 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Oct 14 02:38:33 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Oct 14 02:38:33 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 14 02:38:33 localhost kernel: Run /init as init process
Oct 14 02:38:33 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 14 02:38:33 localhost systemd[1]: Detected virtualization kvm.
Oct 14 02:38:33 localhost systemd[1]: Detected architecture x86-64.
Oct 14 02:38:33 localhost systemd[1]: Running in initrd.
Oct 14 02:38:33 localhost systemd[1]: No hostname configured, using default hostname.
Oct 14 02:38:33 localhost systemd[1]: Hostname set to .
Oct 14 02:38:33 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 14 02:38:33 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 14 02:38:33 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 14 02:38:33 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 14 02:38:33 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 14 02:38:33 localhost systemd[1]: Reached target Local File Systems.
Oct 14 02:38:33 localhost systemd[1]: Reached target Path Units.
Oct 14 02:38:33 localhost systemd[1]: Reached target Slice Units.
Oct 14 02:38:33 localhost systemd[1]: Reached target Swaps.
Oct 14 02:38:33 localhost systemd[1]: Reached target Timer Units.
Oct 14 02:38:33 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 14 02:38:33 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 14 02:38:33 localhost systemd[1]: Listening on Journal Socket.
Oct 14 02:38:33 localhost systemd[1]: Listening on udev Control Socket.
Oct 14 02:38:33 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 14 02:38:33 localhost systemd[1]: Reached target Socket Units.
Oct 14 02:38:33 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 14 02:38:33 localhost systemd[1]: Starting Journal Service...
Oct 14 02:38:33 localhost systemd[1]: Starting Load Kernel Modules...
Oct 14 02:38:33 localhost systemd[1]: Starting Create System Users...
Oct 14 02:38:33 localhost systemd[1]: Starting Setup Virtual Console...
Oct 14 02:38:33 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 14 02:38:33 localhost systemd[1]: Finished Load Kernel Modules.
Oct 14 02:38:33 localhost systemd-journald[284]: Journal started
Oct 14 02:38:33 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/adf6dc17eeaa420ba893ea8f9e53b331) is 8.0M, max 314.7M, 306.7M free.
Oct 14 02:38:33 localhost systemd-modules-load[285]: Module 'msr' is built in
Oct 14 02:38:33 localhost systemd[1]: Started Journal Service.
Oct 14 02:38:33 localhost systemd[1]: Finished Setup Virtual Console.
Oct 14 02:38:33 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 14 02:38:33 localhost systemd[1]: Starting dracut cmdline hook...
Oct 14 02:38:33 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 14 02:38:33 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Oct 14 02:38:33 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Oct 14 02:38:33 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Oct 14 02:38:33 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 14 02:38:33 localhost systemd[1]: Finished Create System Users.
Oct 14 02:38:33 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 14 02:38:33 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 14 02:38:33 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Oct 14 02:38:33 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 14 02:38:33 localhost dracut-cmdline[289]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 14 02:38:33 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 14 02:38:33 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 14 02:38:33 localhost systemd[1]: Finished dracut cmdline hook.
Oct 14 02:38:33 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 14 02:38:33 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 14 02:38:33 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 14 02:38:33 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Oct 14 02:38:33 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 14 02:38:33 localhost kernel: RPC: Registered udp transport module.
Oct 14 02:38:33 localhost kernel: RPC: Registered tcp transport module.
Oct 14 02:38:33 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 14 02:38:33 localhost rpc.statd[408]: Version 2.5.4 starting
Oct 14 02:38:33 localhost rpc.statd[408]: Initializing NSM state
Oct 14 02:38:33 localhost rpc.idmapd[413]: Setting log level to 0
Oct 14 02:38:33 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 14 02:38:33 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 14 02:38:33 localhost systemd-udevd[426]: Using default interface naming scheme 'rhel-9.0'.
Oct 14 02:38:33 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 14 02:38:33 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 14 02:38:33 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 14 02:38:33 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 14 02:38:33 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 14 02:38:33 localhost systemd[1]: Reached target System Initialization.
Oct 14 02:38:33 localhost systemd[1]: Reached target Basic System.
Oct 14 02:38:33 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 14 02:38:33 localhost systemd[1]: Reached target Network.
Oct 14 02:38:33 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 14 02:38:33 localhost systemd[1]: Starting dracut initqueue hook...
Oct 14 02:38:33 localhost systemd-udevd[450]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 02:38:33 localhost kernel: scsi host0: ata_piix
Oct 14 02:38:33 localhost kernel: scsi host1: ata_piix
Oct 14 02:38:33 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Oct 14 02:38:33 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Oct 14 02:38:33 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 14 02:38:33 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Oct 14 02:38:33 localhost kernel: GPT:20971519 != 838860799
Oct 14 02:38:33 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 14 02:38:33 localhost kernel: GPT:20971519 != 838860799
Oct 14 02:38:33 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 14 02:38:33 localhost kernel: vda: vda1 vda2 vda3 vda4
Oct 14 02:38:33 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Oct 14 02:38:34 localhost systemd[1]: Reached target Initrd Root Device.
Oct 14 02:38:34 localhost kernel: ata1: found unknown device (class 0)
Oct 14 02:38:34 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 14 02:38:34 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 14 02:38:34 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 14 02:38:34 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 14 02:38:34 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 14 02:38:34 localhost systemd[1]: Finished dracut initqueue hook.
Oct 14 02:38:34 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 14 02:38:34 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 14 02:38:34 localhost systemd[1]: Reached target Remote File Systems.
Oct 14 02:38:34 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 14 02:38:34 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 14 02:38:34 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Oct 14 02:38:34 localhost systemd-fsck[512]: /usr/sbin/fsck.xfs: XFS file system.
Oct 14 02:38:34 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Oct 14 02:38:34 localhost systemd[1]: Mounting /sysroot...
Oct 14 02:38:34 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 14 02:38:34 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Oct 14 02:38:34 localhost kernel: XFS (vda4): Ending clean mount
Oct 14 02:38:34 localhost systemd[1]: Mounted /sysroot.
Oct 14 02:38:34 localhost systemd[1]: Reached target Initrd Root File System.
Oct 14 02:38:34 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 14 02:38:34 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 14 02:38:34 localhost systemd[1]: Reached target Initrd File Systems.
Oct 14 02:38:34 localhost systemd[1]: Reached target Initrd Default Target.
Oct 14 02:38:34 localhost systemd[1]: Starting dracut mount hook...
Oct 14 02:38:34 localhost systemd[1]: Finished dracut mount hook.
Oct 14 02:38:34 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 14 02:38:34 localhost rpc.idmapd[413]: exiting on signal 15
Oct 14 02:38:34 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 14 02:38:34 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 14 02:38:34 localhost systemd[1]: Stopped target Network.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Timer Units.
Oct 14 02:38:34 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 14 02:38:34 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Basic System.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Path Units.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Remote File Systems.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Slice Units.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Socket Units.
Oct 14 02:38:34 localhost systemd[1]: Stopped target System Initialization.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Local File Systems.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Swaps.
Oct 14 02:38:34 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped dracut mount hook.
Oct 14 02:38:34 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 14 02:38:34 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 14 02:38:34 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 14 02:38:34 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 14 02:38:34 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 14 02:38:34 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Load Kernel Modules.
Oct 14 02:38:34 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 14 02:38:34 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 14 02:38:34 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 14 02:38:34 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 14 02:38:34 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 14 02:38:34 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 14 02:38:34 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 14 02:38:34 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Closed udev Control Socket.
Oct 14 02:38:34 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Closed udev Kernel Socket.
Oct 14 02:38:34 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 14 02:38:34 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 14 02:38:34 localhost systemd[1]: Starting Cleanup udev Database...
Oct 14 02:38:34 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 14 02:38:34 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 14 02:38:34 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 14 02:38:34 localhost systemd[1]: Stopped Create System Users.
Oct 14 02:38:35 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 14 02:38:35 localhost systemd[1]: Finished Cleanup udev Database.
Oct 14 02:38:35 localhost systemd[1]: Reached target Switch Root.
Oct 14 02:38:35 localhost systemd[1]: Starting Switch Root...
Oct 14 02:38:35 localhost systemd[1]: Switching root.
Oct 14 02:38:35 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Oct 14 02:38:35 localhost systemd-journald[284]: Journal stopped
Oct 14 02:38:36 localhost kernel: audit: type=1404 audit(1760423915.177:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 14 02:38:36 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 14 02:38:36 localhost kernel: SELinux: policy capability open_perms=1
Oct 14 02:38:36 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 14 02:38:36 localhost kernel: SELinux: policy capability always_check_network=0
Oct 14 02:38:36 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 14 02:38:36 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 14 02:38:36 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 14 02:38:36 localhost kernel: audit: type=1403 audit(1760423915.312:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 14 02:38:36 localhost systemd[1]: Successfully loaded SELinux policy in 138.937ms.
Oct 14 02:38:36 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.108ms.
Oct 14 02:38:36 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 14 02:38:36 localhost systemd[1]: Detected virtualization kvm.
Oct 14 02:38:36 localhost systemd[1]: Detected architecture x86-64.
Oct 14 02:38:36 localhost systemd-rc-local-generator[582]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 02:38:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 02:38:36 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 14 02:38:36 localhost systemd[1]: Stopped Switch Root.
Oct 14 02:38:36 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 14 02:38:36 localhost systemd[1]: Created slice Slice /system/getty.
Oct 14 02:38:36 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 14 02:38:36 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 14 02:38:36 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 14 02:38:36 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Oct 14 02:38:36 localhost systemd[1]: Created slice User and Session Slice.
Oct 14 02:38:36 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 14 02:38:36 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 14 02:38:36 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 14 02:38:36 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 14 02:38:36 localhost systemd[1]: Stopped target Switch Root.
Oct 14 02:38:36 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 14 02:38:36 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 14 02:38:36 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 14 02:38:36 localhost systemd[1]: Reached target Path Units.
Oct 14 02:38:36 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 14 02:38:36 localhost systemd[1]: Reached target Slice Units.
Oct 14 02:38:36 localhost systemd[1]: Reached target Swaps.
Oct 14 02:38:36 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 14 02:38:36 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 14 02:38:36 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 14 02:38:36 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 14 02:38:36 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 14 02:38:36 localhost systemd[1]: Listening on udev Control Socket.
Oct 14 02:38:36 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 14 02:38:36 localhost systemd[1]: Mounting Huge Pages File System...
Oct 14 02:38:36 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 14 02:38:36 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 14 02:38:36 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 14 02:38:36 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 14 02:38:36 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 14 02:38:36 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 14 02:38:36 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 14 02:38:36 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 14 02:38:36 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 14 02:38:36 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 14 02:38:36 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 14 02:38:36 localhost systemd[1]: Stopped Journal Service.
Oct 14 02:38:36 localhost systemd[1]: Starting Journal Service...
Oct 14 02:38:36 localhost systemd[1]: Starting Load Kernel Modules...
Oct 14 02:38:36 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 14 02:38:36 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 14 02:38:36 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 14 02:38:36 localhost kernel: fuse: init (API version 7.36)
Oct 14 02:38:36 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 14 02:38:36 localhost systemd[1]: Mounted Huge Pages File System.
Oct 14 02:38:36 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 14 02:38:36 localhost systemd-journald[618]: Journal started
Oct 14 02:38:36 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/8e1d5208cffec42b50976967e1d1cfd0) is 8.0M, max 314.7M, 306.7M free.
Oct 14 02:38:36 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 14 02:38:36 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 14 02:38:36 localhost systemd-modules-load[619]: Module 'msr' is built in
Oct 14 02:38:36 localhost systemd[1]: Started Journal Service.
Oct 14 02:38:36 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 14 02:38:36 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 14 02:38:36 localhost kernel: ACPI: bus type drm_connector registered
Oct 14 02:38:36 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 14 02:38:36 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 14 02:38:36 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 14 02:38:36 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 14 02:38:36 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 14 02:38:36 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 14 02:38:36 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 14 02:38:36 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 14 02:38:36 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 14 02:38:36 localhost systemd[1]: Finished Load Kernel Modules.
Oct 14 02:38:36 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 14 02:38:36 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 14 02:38:36 localhost systemd[1]: Mounting FUSE Control File System...
Oct 14 02:38:36 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 14 02:38:36 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 14 02:38:36 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 14 02:38:36 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 14 02:38:36 localhost systemd[1]: Starting Load/Save Random Seed...
Oct 14 02:38:36 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 14 02:38:36 localhost systemd[1]: Starting Create System Users...
Oct 14 02:38:36 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/8e1d5208cffec42b50976967e1d1cfd0) is 8.0M, max 314.7M, 306.7M free.
Oct 14 02:38:36 localhost systemd-journald[618]: Received client request to flush runtime journal.
Oct 14 02:38:36 localhost systemd[1]: Mounted FUSE Control File System.
Oct 14 02:38:36 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 14 02:38:36 localhost systemd[1]: Finished Load/Save Random Seed.
Oct 14 02:38:36 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 14 02:38:36 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 14 02:38:36 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 14 02:38:36 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 14 02:38:36 localhost systemd-sysusers[632]: Creating group 'sgx' with GID 989.
Oct 14 02:38:36 localhost systemd-sysusers[632]: Creating group 'systemd-oom' with GID 988.
Oct 14 02:38:36 localhost systemd-sysusers[632]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Oct 14 02:38:36 localhost systemd[1]: Finished Create System Users.
Oct 14 02:38:36 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 14 02:38:36 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 14 02:38:36 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 14 02:38:36 localhost systemd[1]: Set up automount EFI System Partition Automount.
Oct 14 02:38:36 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 14 02:38:36 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 14 02:38:36 localhost systemd-udevd[636]: Using default interface naming scheme 'rhel-9.0'.
Oct 14 02:38:36 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 14 02:38:36 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 14 02:38:36 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 14 02:38:36 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 14 02:38:36 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 14 02:38:36 localhost systemd-udevd[641]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 02:38:36 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Oct 14 02:38:36 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Oct 14 02:38:36 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Oct 14 02:38:36 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 14 02:38:36 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 14 02:38:36 localhost systemd-fsck[680]: fsck.fat 4.2 (2021-01-31)
Oct 14 02:38:36 localhost systemd-fsck[680]: /dev/vda2: 12 files, 1782/51145 clusters
Oct 14 02:38:36 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Oct 14 02:38:36 localhost kernel: SVM: TSC scaling supported
Oct 14 02:38:36 localhost kernel: kvm: Nested Virtualization enabled
Oct 14 02:38:36 localhost kernel: SVM: kvm: Nested Paging enabled
Oct 14 02:38:36 localhost kernel: SVM: LBR virtualization supported
Oct 14 02:38:36 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 14 02:38:36 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 14 02:38:36 localhost kernel: Console: switching to colour dummy device 80x25
Oct 14 02:38:36 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 14 02:38:36 localhost kernel: [drm] features: -context_init
Oct 14 02:38:36 localhost kernel: [drm] number of scanouts: 1
Oct 14 02:38:36 localhost kernel: [drm] number of cap sets: 0
Oct 14 02:38:36 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Oct 14 02:38:36 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Oct 14 02:38:36 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 14 02:38:36 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 14 02:38:37 localhost systemd[1]: Mounting /boot...
Oct 14 02:38:37 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Oct 14 02:38:37 localhost kernel: XFS (vda3): Ending clean mount
Oct 14 02:38:37 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Oct 14 02:38:37 localhost systemd[1]: Mounted /boot.
Oct 14 02:38:37 localhost systemd[1]: Mounting /boot/efi...
Oct 14 02:38:37 localhost systemd[1]: Mounted /boot/efi.
Oct 14 02:38:37 localhost systemd[1]: Reached target Local File Systems.
Oct 14 02:38:37 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 14 02:38:37 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 14 02:38:37 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 14 02:38:37 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 14 02:38:37 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 14 02:38:37 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 14 02:38:37 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 14 02:38:37 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 717 (bootctl)
Oct 14 02:38:37 localhost systemd[1]: Starting File System Check on /dev/vda2...
Oct 14 02:38:37 localhost systemd[1]: Finished File System Check on /dev/vda2.
Oct 14 02:38:37 localhost systemd[1]: Mounting EFI System Partition Automount...
Oct 14 02:38:37 localhost systemd[1]: Mounted EFI System Partition Automount.
Oct 14 02:38:37 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 14 02:38:37 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 14 02:38:37 localhost systemd[1]: Starting Security Auditing Service...
Oct 14 02:38:37 localhost systemd[1]: Starting RPC Bind...
Oct 14 02:38:37 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 14 02:38:37 localhost auditd[726]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Oct 14 02:38:37 localhost auditd[726]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Oct 14 02:38:37 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 14 02:38:37 localhost systemd[1]: Started RPC Bind.
Oct 14 02:38:37 localhost augenrules[731]: /sbin/augenrules: No change
Oct 14 02:38:37 localhost augenrules[741]: No rules
Oct 14 02:38:37 localhost augenrules[741]: enabled 1
Oct 14 02:38:37 localhost augenrules[741]: failure 1
Oct 14 02:38:37 localhost augenrules[741]: pid 726
Oct 14 02:38:37 localhost augenrules[741]: rate_limit 0
Oct 14 02:38:37 localhost augenrules[741]: backlog_limit 8192
Oct 14 02:38:37 localhost augenrules[741]: lost 0
Oct 14 02:38:37 localhost augenrules[741]: backlog 0
Oct 14 02:38:37 localhost augenrules[741]: backlog_wait_time 60000
Oct 14 02:38:37 localhost augenrules[741]: backlog_wait_time_actual 0
Oct 14 02:38:37 localhost augenrules[741]: enabled 1
Oct 14 02:38:37 localhost augenrules[741]: failure 1
Oct 14 02:38:37 localhost augenrules[741]: pid 726
Oct 14 02:38:37 localhost augenrules[741]: rate_limit 0
Oct 14 02:38:37 localhost augenrules[741]: backlog_limit 8192
Oct 14 02:38:37 localhost augenrules[741]: lost 0
Oct 14 02:38:37 localhost augenrules[741]: backlog 0
Oct 14 02:38:37 localhost augenrules[741]: backlog_wait_time 60000
Oct 14 02:38:37 localhost augenrules[741]: backlog_wait_time_actual 0
Oct 14 02:38:37 localhost augenrules[741]: enabled 1
Oct 14 02:38:37 localhost augenrules[741]: failure 1
Oct 14 02:38:37 localhost augenrules[741]: pid 726
Oct 14 02:38:37 localhost augenrules[741]: rate_limit 0
Oct 14 02:38:37 localhost augenrules[741]: backlog_limit 8192
Oct 14 02:38:37 localhost augenrules[741]: lost 0
Oct 14 02:38:37 localhost augenrules[741]: backlog 3
Oct 14 02:38:37 localhost augenrules[741]: backlog_wait_time 60000
Oct 14 02:38:37 localhost augenrules[741]: backlog_wait_time_actual 0
Oct 14 02:38:37 localhost systemd[1]: Started Security Auditing Service.
Oct 14 02:38:37 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 14 02:38:37 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 14 02:38:37 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 14 02:38:37 localhost systemd[1]: Starting Update is Completed...
Oct 14 02:38:37 localhost systemd[1]: Finished Update is Completed.
Oct 14 02:38:37 localhost systemd[1]: Reached target System Initialization.
Oct 14 02:38:37 localhost systemd[1]: Started dnf makecache --timer.
Oct 14 02:38:37 localhost systemd[1]: Started Daily rotation of log files.
Oct 14 02:38:37 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 14 02:38:37 localhost systemd[1]: Reached target Timer Units.
Oct 14 02:38:37 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 14 02:38:37 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 14 02:38:37 localhost systemd[1]: Reached target Socket Units.
Oct 14 02:38:37 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Oct 14 02:38:37 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 14 02:38:37 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 14 02:38:37 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 14 02:38:37 localhost systemd[1]: Reached target Basic System.
Oct 14 02:38:37 localhost journal[751]: Ready
Oct 14 02:38:37 localhost systemd[1]: Starting NTP client/server...
Oct 14 02:38:37 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 14 02:38:37 localhost systemd[1]: Started irqbalance daemon.
Oct 14 02:38:37 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 14 02:38:37 localhost systemd[1]: Starting System Logging Service...
Oct 14 02:38:37 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 02:38:37 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 02:38:37 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 02:38:37 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 14 02:38:37 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 14 02:38:37 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 14 02:38:37 localhost systemd[1]: Starting User Login Management...
Oct 14 02:38:37 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 14 02:38:37 localhost systemd[1]: Started System Logging Service.
Oct 14 02:38:37 localhost rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] start
Oct 14 02:38:37 localhost rsyslogd[759]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Oct 14 02:38:37 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 14 02:38:37 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data
Oct 14 02:38:37 localhost chronyd[766]: Loaded seccomp filter (level 2)
Oct 14 02:38:37 localhost systemd[1]: Started NTP client/server.
Oct 14 02:38:37 localhost systemd-logind[760]: New seat seat0.
Oct 14 02:38:37 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 14 02:38:37 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 14 02:38:37 localhost systemd[1]: Started User Login Management.
Oct 14 02:38:37 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 02:38:38 localhost cloud-init[770]: Cloud-init v. 22.1-9.el9 running 'init-local' at Tue, 14 Oct 2025 06:38:38 +0000. Up 6.34 seconds.
Oct 14 02:38:38 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpma0vl2ca.mount: Deactivated successfully.
Oct 14 02:38:38 localhost systemd[1]: Starting Hostname Service...
Oct 14 02:38:38 localhost systemd[1]: Started Hostname Service.
Oct 14 02:38:38 localhost systemd-hostnamed[784]: Hostname set to (static)
Oct 14 02:38:38 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Oct 14 02:38:38 localhost systemd[1]: Reached target Preparation for Network.
Oct 14 02:38:38 localhost systemd[1]: Starting Network Manager...
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.6861] NetworkManager (version 1.42.2-1.el9) is starting... (boot:04c98b47-43c2-4625-a59c-ba886a1b7e92)
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.6867] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Oct 14 02:38:38 localhost systemd[1]: Started Network Manager.
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.6908] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 14 02:38:38 localhost systemd[1]: Reached target Network.
Oct 14 02:38:38 localhost systemd[1]: Starting Network Manager Wait Online...
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7004] manager[0x55df23f64020]: monitoring kernel firmware directory '/lib/firmware'.
Oct 14 02:38:38 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7062] hostname: hostname: using hostnamed
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7062] hostname: static hostname changed from (none) to "np0005486731.novalocal"
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7076] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 14 02:38:38 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Oct 14 02:38:38 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 14 02:38:38 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Oct 14 02:38:38 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Oct 14 02:38:38 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 14 02:38:38 localhost systemd[1]: Reached target NFS client services.
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7266] manager[0x55df23f64020]: rfkill: Wi-Fi hardware radio set enabled
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7267] manager[0x55df23f64020]: rfkill: WWAN hardware radio set enabled
Oct 14 02:38:38 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 14 02:38:38 localhost systemd[1]: Reached target Remote File Systems.
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7337] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7338] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7345] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7346] manager: Networking is enabled by state file
Oct 14 02:38:38 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7500] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7501] settings: Loaded settings plugin: keyfile (internal)
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7537] dhcp: init: Using DHCP client 'internal'
Oct 14 02:38:38 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7541] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7561] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7568] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7582] device (lo): Activation: starting connection 'lo' (418506c9-22d0-4fb3-8d96-5d3137cafe64)
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7595] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7600] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Oct 14 02:38:38 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7646] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7651] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7654] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7657] device (eth0): carrier: link connected
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7661] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7669] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7680] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7687] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7688] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7692] manager: NetworkManager state is now CONNECTING
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7696] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7706] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7711] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 14 02:38:38 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7850] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7853] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.7861] device (lo): Activation: successful, device activated.
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.8041] dhcp4 (eth0): state changed new lease, address=38.102.83.104
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.8047] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.8078] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.8099] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.8101] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.8106] manager: NetworkManager state is now CONNECTED_SITE
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.8110] device (eth0): Activation: successful, device activated.
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.8117] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 14 02:38:38 localhost NetworkManager[789]: [1760423918.8122] manager: startup complete
Oct 14 02:38:38 localhost systemd[1]: Finished Network Manager Wait Online.
Oct 14 02:38:38 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Oct 14 02:38:39 localhost cloud-init[934]: Cloud-init v. 22.1-9.el9 running 'init' at Tue, 14 Oct 2025 06:38:39 +0000. Up 7.26 seconds.
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | eth0 | True | 38.102.83.104 | 255.255.255.0 | global | fa:16:3e:c9:f0:cc |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | eth0 | True | fe80::f816:3eff:fec9:f0cc/64 | . | link | fa:16:3e:c9:f0:cc |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | lo | True | ::1/128 | . | host | . |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | 1 | fe80::/64 | :: | eth0 | U |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: | 3 | multicast | :: | eth0 | U |
Oct 14 02:38:39 localhost cloud-init[934]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 14 02:38:39 localhost systemd[1]: Starting Authorization Manager...
Oct 14 02:38:39 localhost polkitd[1034]: Started polkitd version 0.117
Oct 14 02:38:39 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Oct 14 02:38:39 localhost systemd[1]: Started Authorization Manager.
Oct 14 02:38:41 localhost cloud-init[934]: Generating public/private rsa key pair.
Oct 14 02:38:41 localhost cloud-init[934]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 14 02:38:41 localhost cloud-init[934]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 14 02:38:41 localhost cloud-init[934]: The key fingerprint is:
Oct 14 02:38:41 localhost cloud-init[934]: SHA256:OmnGHo/JN29ogp0gQ6uee5a4IDKc77SbJCjTFXa/VoU root@np0005486731.novalocal
Oct 14 02:38:41 localhost cloud-init[934]: The key's randomart image is:
Oct 14 02:38:41 localhost cloud-init[934]: +---[RSA 3072]----+
Oct 14 02:38:41 localhost cloud-init[934]: | |
Oct 14 02:38:41 localhost cloud-init[934]: | . |
Oct 14 02:38:41 localhost cloud-init[934]: | o . E . |
Oct 14 02:38:41 localhost cloud-init[934]: | .. o . . |
Oct 14 02:38:41 localhost cloud-init[934]: | . .. S . |
Oct 14 02:38:41 localhost cloud-init[934]: |o.=... o o |
Oct 14 02:38:41 localhost cloud-init[934]: |O=o=.+O.o. |
Oct 14 02:38:41 localhost cloud-init[934]: |*+*++=+B= . |
Oct 14 02:38:41 localhost cloud-init[934]: |.=**. =+.+. |
Oct 14 02:38:41 localhost cloud-init[934]: +----[SHA256]-----+
Oct 14 02:38:41 localhost cloud-init[934]: Generating public/private ecdsa key pair.
Oct 14 02:38:41 localhost cloud-init[934]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 14 02:38:41 localhost cloud-init[934]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 14 02:38:41 localhost cloud-init[934]: The key fingerprint is:
Oct 14 02:38:41 localhost cloud-init[934]: SHA256:FhaByXUlk97KqgAlIsH03Kh0OEPa9ywZkzee7XbjMvQ root@np0005486731.novalocal
Oct 14 02:38:41 localhost cloud-init[934]: The key's randomart image is:
Oct 14 02:38:41 localhost cloud-init[934]: +---[ECDSA 256]---+
Oct 14 02:38:41 localhost cloud-init[934]: |+o . ++.+o. |
Oct 14 02:38:41 localhost cloud-init[934]: |o++ o.+ o.o |
Oct 14 02:38:41 localhost cloud-init[934]: |+=o=*.o o. . |
Oct 14 02:38:41 localhost cloud-init[934]: |o.=+ O = .. . |
Oct 14 02:38:41 localhost cloud-init[934]: | .. o = S. . |
Oct 14 02:38:41 localhost cloud-init[934]: | . . o. o |
Oct 14 02:38:41 localhost cloud-init[934]: | . .ooo |
Oct 14 02:38:41 localhost cloud-init[934]: | . .+oE. |
Oct 14 02:38:41 localhost cloud-init[934]: | .. o. |
Oct 14 02:38:41 localhost cloud-init[934]: +----[SHA256]-----+
Oct 14 02:38:41 localhost cloud-init[934]: Generating public/private ed25519 key pair.
Oct 14 02:38:41 localhost cloud-init[934]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 14 02:38:41 localhost cloud-init[934]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 14 02:38:41 localhost cloud-init[934]: The key fingerprint is:
Oct 14 02:38:41 localhost cloud-init[934]: SHA256:q3RS6PT/KeqO0X3Ww3nPqMeCg0lwFsSyz/UFmPyYAEU root@np0005486731.novalocal
Oct 14 02:38:41 localhost cloud-init[934]: The key's randomart image is:
Oct 14 02:38:41 localhost cloud-init[934]: +--[ED25519 256]--+
Oct 14 02:38:41 localhost cloud-init[934]: | .*E. o |
Oct 14 02:38:41 localhost cloud-init[934]: | . + + . |
Oct 14 02:38:41 localhost cloud-init[934]: | o o + . |
Oct 14 02:38:41 localhost cloud-init[934]: | o.o + . . |
Oct 14 02:38:41 localhost cloud-init[934]: | o*S. . . |
Oct 14 02:38:41 localhost cloud-init[934]: | o ++o .o . |
Oct 14 02:38:41 localhost cloud-init[934]: | =.=o..o.= .|
Oct 14 02:38:41 localhost cloud-init[934]: | . *o.+o..o=.|
Oct 14 02:38:41 localhost cloud-init[934]: | oo+..+++. o|
Oct 14 02:38:41 localhost cloud-init[934]: +----[SHA256]-----+
Oct 14 02:38:41 localhost sm-notify[1130]: Version 2.5.4 starting
Oct 14 02:38:41 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Oct 14 02:38:41 localhost systemd[1]: Reached target Cloud-config availability.
Oct 14 02:38:41 localhost systemd[1]: Reached target Network is Online.
Oct 14 02:38:41 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Oct 14 02:38:41 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Oct 14 02:38:41 localhost systemd[1]: Starting Crash recovery kernel arming...
Oct 14 02:38:41 localhost systemd[1]: Starting Notify NFS peers of a restart...
Oct 14 02:38:41 localhost systemd[1]: Starting OpenSSH server daemon...
Oct 14 02:38:41 localhost systemd[1]: Starting Permit User Sessions...
Oct 14 02:38:41 localhost systemd[1]: Started Notify NFS peers of a restart.
Oct 14 02:38:41 localhost systemd[1]: Finished Permit User Sessions.
Oct 14 02:38:41 localhost sshd[1131]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:38:41 localhost systemd[1]: Started Command Scheduler.
Oct 14 02:38:41 localhost systemd[1]: Started Getty on tty1.
Oct 14 02:38:41 localhost systemd[1]: Started Serial Getty on ttyS0.
Oct 14 02:38:41 localhost systemd[1]: Reached target Login Prompts.
Oct 14 02:38:41 localhost systemd[1]: Started OpenSSH server daemon.
Oct 14 02:38:41 localhost systemd[1]: Reached target Multi-User System.
Oct 14 02:38:41 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 14 02:38:41 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 14 02:38:41 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 14 02:38:42 localhost kdumpctl[1135]: kdump: No kdump initial ramdisk found.
Oct 14 02:38:42 localhost kdumpctl[1135]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Oct 14 02:38:42 localhost cloud-init[1251]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Tue, 14 Oct 2025 06:38:42 +0000. Up 10.30 seconds.
Oct 14 02:38:42 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Oct 14 02:38:42 localhost systemd[1]: Starting Execute cloud user/final scripts...
Oct 14 02:38:42 localhost cloud-init[1417]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Tue, 14 Oct 2025 06:38:42 +0000. Up 10.62 seconds.
Oct 14 02:38:42 localhost dracut[1419]: dracut-057-21.git20230214.el9
Oct 14 02:38:42 localhost cloud-init[1436]: #############################################################
Oct 14 02:38:42 localhost cloud-init[1437]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 14 02:38:42 localhost cloud-init[1439]: 256 SHA256:FhaByXUlk97KqgAlIsH03Kh0OEPa9ywZkzee7XbjMvQ root@np0005486731.novalocal (ECDSA)
Oct 14 02:38:42 localhost cloud-init[1441]: 256 SHA256:q3RS6PT/KeqO0X3Ww3nPqMeCg0lwFsSyz/UFmPyYAEU root@np0005486731.novalocal (ED25519)
Oct 14 02:38:42 localhost cloud-init[1443]: 3072 SHA256:OmnGHo/JN29ogp0gQ6uee5a4IDKc77SbJCjTFXa/VoU root@np0005486731.novalocal (RSA)
Oct 14 02:38:42 localhost cloud-init[1444]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 14 02:38:42 localhost cloud-init[1446]: #############################################################
Oct 14 02:38:42 localhost dracut[1421]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Oct 14 02:38:42 localhost cloud-init[1417]: Cloud-init v. 22.1-9.el9 finished at Tue, 14 Oct 2025 06:38:42 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 10.87 seconds
Oct 14 02:38:42 localhost systemd[1]: Reloading Network Manager...
Oct 14 02:38:42 localhost NetworkManager[789]: [1760423922.7437] audit: op="reload" arg="0" pid=1536 uid=0 result="success"
Oct 14 02:38:42 localhost NetworkManager[789]: [1760423922.7443] config: signal: SIGHUP (no changes from disk)
Oct 14 02:38:42 localhost systemd[1]: Reloaded Network Manager.
Oct 14 02:38:42 localhost systemd[1]: Finished Execute cloud user/final scripts.
Oct 14 02:38:42 localhost systemd[1]: Reached target Cloud-init target.
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Oct 14 02:38:42 localhost dracut[1421]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: memstrack is not available
Oct 14 02:38:43 localhost dracut[1421]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Oct 14 02:38:43 localhost dracut[1421]: memstrack is not available
Oct 14 02:38:43 localhost dracut[1421]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Oct 14 02:38:43 localhost chronyd[766]: Selected source 162.159.200.123 (2.rhel.pool.ntp.org)
Oct 14 02:38:44 localhost chronyd[766]: System clock wrong by 1.049306 seconds
Oct 14 02:38:44 localhost chronyd[766]: System clock was stepped by 1.049306 seconds
Oct 14 02:38:44 localhost chronyd[766]: System clock TAI offset set to 37 seconds
Oct 14 02:38:44 localhost dracut[1421]: *** Including module: systemd ***
Oct 14 02:38:45 localhost dracut[1421]: *** Including module: systemd-initrd ***
Oct 14 02:38:45 localhost dracut[1421]: *** Including module: i18n ***
Oct 14 02:38:45 localhost dracut[1421]: No KEYMAP configured.
Oct 14 02:38:45 localhost dracut[1421]: *** Including module: drm ***
Oct 14 02:38:45 localhost dracut[1421]: *** Including module: prefixdevname ***
Oct 14 02:38:45 localhost dracut[1421]: *** Including module: kernel-modules ***
Oct 14 02:38:46 localhost dracut[1421]: *** Including module: kernel-modules-extra ***
Oct 14 02:38:46 localhost dracut[1421]: *** Including module: qemu ***
Oct 14 02:38:46 localhost dracut[1421]: *** Including module: fstab-sys ***
Oct 14 02:38:46 localhost dracut[1421]: *** Including module: rootfs-block ***
Oct 14 02:38:46 localhost dracut[1421]: *** Including module: terminfo ***
Oct 14 02:38:46 localhost dracut[1421]: *** Including module: udev-rules ***
Oct 14 02:38:47 localhost dracut[1421]: Skipping udev rule: 91-permissions.rules
Oct 14 02:38:47 localhost dracut[1421]: Skipping udev rule: 80-drivers-modprobe.rules
Oct 14 02:38:47 localhost dracut[1421]: *** Including module: virtiofs ***
Oct 14 02:38:47 localhost dracut[1421]: *** Including module: dracut-systemd ***
Oct 14 02:38:47 localhost dracut[1421]: *** Including module: usrmount ***
Oct 14 02:38:47 localhost dracut[1421]: *** Including module: base ***
Oct 14 02:38:47 localhost dracut[1421]: *** Including module: fs-lib ***
Oct 14 02:38:47 localhost dracut[1421]: *** Including module: kdumpbase ***
Oct 14 02:38:47 localhost dracut[1421]: *** Including module: microcode_ctl-fw_dir_override ***
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl module: mangling fw_dir
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: configuration "intel" is ignored
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: configuration "intel-06-2d-07" is ignored
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: configuration "intel-06-4e-03" is ignored
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: configuration "intel-06-55-04" is ignored
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: configuration "intel-06-5e-03" is ignored
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: configuration "intel-06-8c-01" is ignored
Oct 14 02:38:47 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Oct 14 02:38:48 localhost dracut[1421]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Oct 14 02:38:48 localhost dracut[1421]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Oct 14 02:38:48 localhost dracut[1421]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Oct 14 02:38:48 localhost dracut[1421]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Oct 14 02:38:48 localhost dracut[1421]: *** Including module: shutdown ***
Oct 14 02:38:48 localhost dracut[1421]: *** Including module: squash ***
Oct 14 02:38:48 localhost dracut[1421]: *** Including modules done ***
Oct 14 02:38:48 localhost dracut[1421]: *** Installing kernel module dependencies ***
Oct 14 02:38:48 localhost dracut[1421]: *** Installing kernel module dependencies done ***
Oct 14 02:38:48 localhost dracut[1421]: *** Resolving executable dependencies ***
Oct 14 02:38:49 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 14 02:38:50 localhost dracut[1421]: *** Resolving executable dependencies done ***
Oct 14 02:38:50 localhost dracut[1421]: *** Hardlinking files ***
Oct 14 02:38:50 localhost dracut[1421]: Mode: real
Oct 14 02:38:50 localhost dracut[1421]: Files: 1099
Oct 14 02:38:50 localhost dracut[1421]: Linked: 3 files
Oct 14 02:38:50 localhost dracut[1421]: Compared: 0 xattrs
Oct 14 02:38:50 localhost dracut[1421]: Compared: 373 files
Oct 14 02:38:50 localhost dracut[1421]: Saved: 61.04 KiB
Oct 14 02:38:50 localhost dracut[1421]: Duration: 0.049427 seconds
Oct 14 02:38:50 localhost dracut[1421]: *** Hardlinking files done ***
Oct 14 02:38:50 localhost dracut[1421]: Could not find 'strip'. Not stripping the initramfs.
Oct 14 02:38:50 localhost dracut[1421]: *** Generating early-microcode cpio image ***
Oct 14 02:38:50 localhost dracut[1421]: *** Constructing AuthenticAMD.bin ***
Oct 14 02:38:50 localhost dracut[1421]: *** Store current command line parameters ***
Oct 14 02:38:50 localhost dracut[1421]: Stored kernel commandline:
Oct 14 02:38:50 localhost dracut[1421]: No dracut internal kernel commandline stored in the initramfs
Oct 14 02:38:50 localhost dracut[1421]: *** Install squash loader ***
Oct 14 02:38:51 localhost dracut[1421]: *** Squashing the files inside the initramfs ***
Oct 14 02:38:51 localhost dracut[1421]: *** Squashing the files inside the initramfs done ***
Oct 14 02:38:51 localhost dracut[1421]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Oct 14 02:38:52 localhost dracut[1421]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Oct 14 02:38:52 localhost kdumpctl[1135]: kdump: kexec: loaded kdump kernel
Oct 14 02:38:52 localhost kdumpctl[1135]: kdump: Starting kdump: [OK]
Oct 14 02:38:52 localhost systemd[1]: Finished Crash recovery kernel arming.
Oct 14 02:38:52 localhost systemd[1]: Startup finished in 1.226s (kernel) + 2.161s (initrd) + 16.401s (userspace) = 19.789s.
Oct 14 02:39:09 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 14 02:39:16 localhost sshd[4156]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:39:16 localhost sshd[4158]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:39:16 localhost sshd[4160]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:39:16 localhost sshd[4162]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:39:16 localhost sshd[4164]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:39:16 localhost sshd[4166]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:39:16 localhost sshd[4168]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:39:16 localhost sshd[4170]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:39:16 localhost sshd[4172]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:41:19 localhost systemd[1]: Unmounting EFI System Partition Automount...
Oct 14 02:41:19 localhost systemd[1]: efi.mount: Deactivated successfully.
Oct 14 02:41:19 localhost systemd[1]: Unmounted EFI System Partition Automount.
Oct 14 02:48:17 localhost sshd[4180]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:49:37 localhost sshd[4183]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:52:10 localhost sshd[4186]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 02:52:11 localhost systemd[1]: Created slice User Slice of UID 1000.
Oct 14 02:52:11 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 14 02:52:11 localhost systemd-logind[760]: New session 1 of user zuul.
Oct 14 02:52:11 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 14 02:52:11 localhost systemd[1]: Starting User Manager for UID 1000...
Oct 14 02:52:11 localhost systemd[4190]: Queued start job for default target Main User Target.
Oct 14 02:52:11 localhost systemd[4190]: Created slice User Application Slice.
Oct 14 02:52:11 localhost systemd[4190]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 14 02:52:11 localhost systemd[4190]: Started Daily Cleanup of User's Temporary Directories.
Oct 14 02:52:11 localhost systemd[4190]: Reached target Paths.
Oct 14 02:52:11 localhost systemd[4190]: Reached target Timers.
Oct 14 02:52:11 localhost systemd[4190]: Starting D-Bus User Message Bus Socket...
Oct 14 02:52:11 localhost systemd[4190]: Starting Create User's Volatile Files and Directories...
Oct 14 02:52:11 localhost systemd[4190]: Listening on D-Bus User Message Bus Socket.
Oct 14 02:52:11 localhost systemd[4190]: Reached target Sockets.
Oct 14 02:52:11 localhost systemd[4190]: Finished Create User's Volatile Files and Directories.
Oct 14 02:52:11 localhost systemd[4190]: Reached target Basic System.
Oct 14 02:52:11 localhost systemd[4190]: Reached target Main User Target.
Oct 14 02:52:11 localhost systemd[4190]: Startup finished in 113ms.
Oct 14 02:52:11 localhost systemd[1]: Started User Manager for UID 1000.
Oct 14 02:52:11 localhost systemd[1]: Started Session 1 of User zuul.
Oct 14 02:52:11 localhost python3[4243]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 02:52:22 localhost python3[4261]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 02:52:31 localhost python3[4313]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 02:52:33 localhost python3[4343]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 14 02:52:36 localhost python3[4359]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUv/ZB171sShkvmUwM4/A+38mOKHSoVqmUnoFRrcde+TmaD2jOKfnaBsMdk2YTdAdiPwM8PX7LYcOftZjXZ92Uqg/gQ0pshmFBVtIcoN0HEQlFtMQltRrBVPG+qHK5UOF2bUImKqqFx3uTPSmteSVgJtwvFqp/51YTUibYgQBWJPCcOSze95nxendWi6PoXzvorqCyVS44Llj4LmLChBJeqAI5cWs2EeDhQ4Tw8F33iKpBg8WjZAbQVbe2KIQYURMtANtjUJ0Yg5RTArSq57504iqodB4+ynahul8Dp5+TocLZTPu5orcqRGqWDe7CN5pc1eXZQuNNZ0jW59y52GY+ox+WCmp1qvB7TQzhc/r+kAVmT8VNTVUvC5TBGcIw3yxI7lzrd03zpenSL3oyJnFN4SXCeAA8YcXlz7ySaO9YAtbCSdkgj8QJCiykvalRm17F4d4aRX5+rtfEm+WG670vF6FRNNo5OTXTK2Ja84pej1bjzDBvEz81D1EqnHybfJ0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:52:36 localhost python3[4373]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:37 localhost python3[4432]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 02:52:38 localhost python3[4473]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760424757.655137-389-267157505424711/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=6c04c38dfe8e446399a5e5f9dbe4740b_id_rsa follow=False checksum=ca0549f1043aa781cfe5001a3649a4105abf4f82 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:39 localhost python3[4546]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 02:52:39 localhost python3[4587]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760424759.2494082-488-228785318298780/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=6c04c38dfe8e446399a5e5f9dbe4740b_id_rsa.pub follow=False checksum=8b573aa2906c160b2f7b53c64bd37790afdd4394 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:41 localhost python3[4615]: ansible-ping Invoked with data=pong
Oct 14 02:52:43 localhost python3[4629]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 02:52:47 localhost python3[4681]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 14 02:52:50 localhost python3[4703]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:50 localhost python3[4717]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:50 localhost python3[4731]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:51 localhost python3[4745]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:52 localhost python3[4759]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:52 localhost python3[4773]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:54 localhost python3[4789]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:52:56 localhost python3[4837]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 02:52:56 localhost python3[4880]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760424775.9611428-96-74949627757263/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 02:53:03 localhost python3[4908]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:03 localhost python3[4922]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:04 localhost python3[4936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:04 localhost python3[4950]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:04 localhost python3[4964]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:04 localhost python3[4978]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:04 localhost python3[4992]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:05 localhost python3[5006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:05 localhost python3[5020]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:05 localhost python3[5034]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:06 localhost python3[5048]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:06 localhost python3[5062]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:06 localhost python3[5076]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:06 localhost python3[5090]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:07 localhost python3[5104]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:07 localhost python3[5118]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:07 localhost python3[5132]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:07 localhost python3[5146]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:08 localhost python3[5160]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:08 localhost python3[5174]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:08 localhost python3[5188]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:09 localhost python3[5202]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:09 localhost python3[5216]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:09 localhost python3[5230]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:09 localhost python3[5244]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:10 localhost python3[5258]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 02:53:11 localhost python3[5274]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 14 02:53:12 localhost systemd[1]: Starting Time & Date Service...
Oct 14 02:53:12 localhost systemd[1]: Started Time & Date Service.
Oct 14 02:53:12 localhost systemd-timedated[5276]: Changed time zone to 'UTC' (UTC).
Oct 14 02:53:13 localhost python3[5295]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 02:53:14 localhost python3[5341]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 02:53:15 localhost python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760424794.7021816-491-164486772309070/source _original_basename=tmpexur9isb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 02:53:16 localhost python3[5442]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 02:53:16 localhost python3[5483]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760424796.261715-582-175193814660620/source _original_basename=tmplts_yuo1 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 02:53:18 localhost python3[5545]: ansible-ansible.legacy.stat Invoked 
with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 02:53:18 localhost python3[5588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760424798.3385384-725-63793871391402/source _original_basename=tmpw0glw_2e follow=False checksum=c5c0705803ad624a8ffce4830305d6af54e0033e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 02:53:20 localhost python3[5616]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 02:53:20 localhost python3[5632]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 02:53:21 localhost python3[5682]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 02:53:21 localhost python3[5725]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760424801.2841933-851-167923481909934/source _original_basename=tmpzolqu76n follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 02:53:22 localhost python3[5756]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-51fb-2668-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 02:53:24 localhost python3[5774]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-51fb-2668-000000000024-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Oct 14 02:53:25 localhost python3[5792]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 02:53:42 localhost systemd[1]: Starting Cleanup of Temporary Directories... Oct 14 02:53:42 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Oct 14 02:53:42 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Oct 14 02:53:42 localhost systemd[1]: Finished Cleanup of Temporary Directories. Oct 14 02:53:42 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. 
Oct 14 02:53:44 localhost python3[5813]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 02:54:22 localhost systemd[4190]: Starting Mark boot as successful... Oct 14 02:54:22 localhost systemd[4190]: Finished Mark boot as successful. Oct 14 02:54:44 localhost systemd-logind[760]: Session 1 logged out. Waiting for processes to exit. Oct 14 02:57:02 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Oct 14 02:57:02 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Oct 14 02:57:02 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Oct 14 02:57:02 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Oct 14 02:57:02 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Oct 14 02:57:02 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Oct 14 02:57:02 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Oct 14 02:57:02 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Oct 14 02:57:02 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Oct 14 02:57:02 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6562] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Oct 14 02:57:02 localhost systemd-udevd[5818]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6692] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6723] settings: (eth1): created default wired connection 'Wired connection 1' Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6726] device (eth1): carrier: link connected Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6729] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6734] policy: auto-activating connection 'Wired connection 1' (46f2cb85-b970-3000-9e7a-7a8cd2bcbc00) Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6740] device (eth1): Activation: starting connection 'Wired connection 1' (46f2cb85-b970-3000-9e7a-7a8cd2bcbc00) Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6741] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6744] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6749] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Oct 14 02:57:02 localhost NetworkManager[789]: [1760425022.6752] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Oct 14 02:57:03 localhost sshd[5820]: main: sshd: ssh-rsa algorithm is disabled Oct 14 02:57:03 localhost systemd-logind[760]: New session 3 of user zuul. Oct 14 02:57:03 localhost systemd[1]: Started Session 3 of User zuul. 
Oct 14 02:57:03 localhost python3[5837]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-99d4-3d1d-00000000039b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 02:57:03 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Oct 14 02:57:16 localhost python3[5887]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 02:57:17 localhost python3[5930]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760425036.5036976-435-197271883723857/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=b8f9aac995784f1718a3bbc937ae88a2f2e3b1a4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 02:57:17 localhost python3[5960]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 02:57:17 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Oct 14 02:57:17 localhost systemd[1]: Stopped Network Manager Wait Online. Oct 14 02:57:17 localhost systemd[1]: Stopping Network Manager Wait Online... Oct 14 02:57:17 localhost systemd[1]: Stopping Network Manager... Oct 14 02:57:17 localhost NetworkManager[789]: [1760425037.7375] caught SIGTERM, shutting down normally. 
Oct 14 02:57:17 localhost NetworkManager[789]: [1760425037.7468] dhcp4 (eth0): canceled DHCP transaction Oct 14 02:57:17 localhost NetworkManager[789]: [1760425037.7469] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Oct 14 02:57:17 localhost NetworkManager[789]: [1760425037.7469] dhcp4 (eth0): state changed no lease Oct 14 02:57:17 localhost NetworkManager[789]: [1760425037.7476] manager: NetworkManager state is now CONNECTING Oct 14 02:57:17 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Oct 14 02:57:17 localhost NetworkManager[789]: [1760425037.7588] dhcp4 (eth1): canceled DHCP transaction Oct 14 02:57:17 localhost NetworkManager[789]: [1760425037.7589] dhcp4 (eth1): state changed no lease Oct 14 02:57:17 localhost NetworkManager[789]: [1760425037.7646] exiting (success) Oct 14 02:57:17 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Oct 14 02:57:17 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Oct 14 02:57:17 localhost systemd[1]: Stopped Network Manager. Oct 14 02:57:17 localhost systemd[1]: NetworkManager.service: Consumed 4.810s CPU time. Oct 14 02:57:17 localhost systemd[1]: Starting Network Manager... Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.8118] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:04c98b47-43c2-4625-a59c-ba886a1b7e92) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.8121] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.8137] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Oct 14 02:57:17 localhost systemd[1]: Started Network Manager. Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.8186] manager[0x561ddc159090]: monitoring kernel firmware directory '/lib/firmware'. Oct 14 02:57:17 localhost systemd[1]: Starting Network Manager Wait Online... 
Oct 14 02:57:17 localhost systemd[1]: Starting Hostname Service... Oct 14 02:57:17 localhost systemd[1]: Started Hostname Service. Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9060] hostname: hostname: using hostnamed Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9061] hostname: static hostname changed from (none) to "np0005486731.novalocal" Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9067] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9073] manager[0x561ddc159090]: rfkill: Wi-Fi hardware radio set enabled Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9074] manager[0x561ddc159090]: rfkill: WWAN hardware radio set enabled Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9116] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9117] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9118] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9119] manager: Networking is enabled by state file Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9128] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9128] settings: Loaded settings plugin: keyfile (internal) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9176] dhcp: init: Using DHCP client 'internal' Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9181] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9189] device (lo): state change: unmanaged -> 
unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9197] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9210] device (lo): Activation: starting connection 'lo' (418506c9-22d0-4fb3-8d96-5d3137cafe64) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9220] device (eth0): carrier: link connected Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9226] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9233] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9234] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9243] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9253] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9264] device (eth1): carrier: link connected Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9270] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9278] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (46f2cb85-b970-3000-9e7a-7a8cd2bcbc00) (indicated) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9278] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', 
sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9286] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9297] device (eth1): Activation: starting connection 'Wired connection 1' (46f2cb85-b970-3000-9e7a-7a8cd2bcbc00) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9329] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9332] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9335] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9339] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9345] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9348] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9353] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9356] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9372] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9379] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Oct 14 02:57:17 localhost NetworkManager[5972]: 
[1760425037.9396] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9402] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9511] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9519] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9531] device (lo): Activation: successful, device activated. Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9547] dhcp4 (eth0): state changed new lease, address=38.102.83.104 Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9556] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9685] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9724] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9727] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9733] manager: NetworkManager state is now CONNECTED_SITE Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9737] device (eth0): Activation: successful, device activated. 
Oct 14 02:57:17 localhost NetworkManager[5972]: [1760425037.9744] manager: NetworkManager state is now CONNECTED_GLOBAL Oct 14 02:57:18 localhost python3[6026]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-99d4-3d1d-000000000120-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 02:57:22 localhost systemd[4190]: Created slice User Background Tasks Slice. Oct 14 02:57:22 localhost systemd[4190]: Starting Cleanup of User's Temporary Files and Directories... Oct 14 02:57:22 localhost systemd[4190]: Finished Cleanup of User's Temporary Files and Directories. Oct 14 02:57:28 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Oct 14 02:57:47 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 14 02:58:02 localhost NetworkManager[5972]: [1760425082.8249] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Oct 14 02:58:02 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Oct 14 02:58:02 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Oct 14 02:58:02 localhost NetworkManager[5972]: [1760425082.8469] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Oct 14 02:58:02 localhost NetworkManager[5972]: [1760425082.8472] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Oct 14 02:58:02 localhost NetworkManager[5972]: [1760425082.8480] device (eth1): Activation: successful, device activated. Oct 14 02:58:02 localhost NetworkManager[5972]: [1760425082.8487] manager: startup complete Oct 14 02:58:02 localhost systemd[1]: Finished Network Manager Wait Online. 
Oct 14 02:58:12 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Oct 14 02:58:18 localhost systemd[1]: session-3.scope: Deactivated successfully. Oct 14 02:58:18 localhost systemd[1]: session-3.scope: Consumed 1.442s CPU time. Oct 14 02:58:18 localhost systemd-logind[760]: Session 3 logged out. Waiting for processes to exit. Oct 14 02:58:18 localhost systemd-logind[760]: Removed session 3. Oct 14 02:59:26 localhost sshd[6062]: main: sshd: ssh-rsa algorithm is disabled Oct 14 02:59:26 localhost systemd-logind[760]: New session 4 of user zuul. Oct 14 02:59:26 localhost systemd[1]: Started Session 4 of User zuul. Oct 14 02:59:27 localhost python3[6113]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 02:59:27 localhost python3[6156]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760425166.8341-628-227519388038715/source _original_basename=tmpykbw1dkr follow=False checksum=abc21b3971e70fb47653ad1df5ee2cc661041e3d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 02:59:31 localhost systemd[1]: session-4.scope: Deactivated successfully. Oct 14 02:59:31 localhost systemd-logind[760]: Session 4 logged out. Waiting for processes to exit. Oct 14 02:59:31 localhost systemd-logind[760]: Removed session 4. Oct 14 03:05:30 localhost sshd[6190]: main: sshd: ssh-rsa algorithm is disabled Oct 14 03:05:30 localhost systemd-logind[760]: New session 5 of user zuul. Oct 14 03:05:30 localhost systemd[1]: Started Session 5 of User zuul. 
Oct 14 03:05:30 localhost python3[6209]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-72d1-c0ab-000000001d20-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:05:31 localhost python3[6228]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 03:05:32 localhost python3[6244]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 03:05:32 localhost python3[6260]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 03:05:32 localhost python3[6276]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 03:05:33 localhost python3[6292]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 03:05:33 localhost python3[6292]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually Oct 14 03:05:34 localhost python3[6308]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 03:05:34 localhost systemd[1]: Reloading. Oct 14 03:05:34 localhost systemd-rc-local-generator[6327]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 03:05:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 03:05:36 localhost python3[6355]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 14 03:05:37 localhost python3[6371]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:05:37 localhost python3[6389]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:05:38 localhost python3[6407]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:05:38 localhost python3[6425]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:05:39 localhost python3[6442]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-72d1-c0ab-000000001d26-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:05:40 localhost python3[6462]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 03:05:43 localhost systemd-logind[760]: Session 5 logged out. Waiting for processes to exit.
Oct 14 03:05:43 localhost systemd[1]: session-5.scope: Deactivated successfully.
Oct 14 03:05:43 localhost systemd[1]: session-5.scope: Consumed 3.315s CPU time.
Oct 14 03:05:43 localhost systemd-logind[760]: Removed session 5.
Oct 14 03:07:08 localhost sshd[6470]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:07:08 localhost systemd-logind[760]: New session 6 of user zuul.
Oct 14 03:07:08 localhost systemd[1]: Started Session 6 of User zuul.
Oct 14 03:07:09 localhost systemd[1]: Starting RHSM dbus service...
Oct 14 03:07:09 localhost systemd[1]: Started RHSM dbus service.
Oct 14 03:07:09 localhost rhsm-service[6494]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 14 03:07:09 localhost rhsm-service[6494]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 14 03:07:09 localhost rhsm-service[6494]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 14 03:07:09 localhost rhsm-service[6494]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 14 03:07:12 localhost rhsm-service[6494]: INFO [subscription_manager.managerlib:90] Consumer created: np0005486731.novalocal (fafbc9b5-5bcf-4c9b-8ea7-ff83fa6c70ff)
Oct 14 03:07:12 localhost subscription-manager[6494]: Registered system with identity: fafbc9b5-5bcf-4c9b-8ea7-ff83fa6c70ff
Oct 14 03:07:12 localhost rhsm-service[6494]: INFO [subscription_manager.entcertlib:131] certs updated:
Oct 14 03:07:12 localhost rhsm-service[6494]: Total updates: 1
Oct 14 03:07:12 localhost rhsm-service[6494]: Found (local) serial# []
Oct 14 03:07:12 localhost rhsm-service[6494]: Expected (UEP) serial# [610876053760577793]
Oct 14 03:07:12 localhost rhsm-service[6494]: Added (new)
Oct 14 03:07:12 localhost rhsm-service[6494]: [sn:610876053760577793 ( Content Access,) @ /etc/pki/entitlement/610876053760577793.pem]
Oct 14 03:07:12 localhost rhsm-service[6494]: Deleted (rogue):
Oct 14 03:07:12 localhost rhsm-service[6494]:
Oct 14 03:07:12 localhost subscription-manager[6494]: Added subscription for 'Content Access' contract 'None'
Oct 14 03:07:12 localhost subscription-manager[6494]: Added subscription for product ' Content Access'
Oct 14 03:07:15 localhost rhsm-service[6494]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 14 03:07:15 localhost rhsm-service[6494]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 14 03:07:15 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:07:15 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:07:15 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:07:15 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:07:16 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:07:22 localhost python3[6585]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-1d49-1048-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:07:23 localhost python3[6604]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 03:07:53 localhost setsebool[6679]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 14 03:07:53 localhost setsebool[6679]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 14 03:08:05 localhost kernel: SELinux: Converting 409 SID table entries...
Oct 14 03:08:05 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 14 03:08:05 localhost kernel: SELinux: policy capability open_perms=1
Oct 14 03:08:05 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 14 03:08:05 localhost kernel: SELinux: policy capability always_check_network=0
Oct 14 03:08:05 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 14 03:08:05 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 14 03:08:05 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 14 03:08:14 localhost sshd[7422]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:08:17 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=3 res=1
Oct 14 03:08:17 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 03:08:17 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 14 03:08:17 localhost systemd[1]: Reloading.
Oct 14 03:08:17 localhost systemd-rc-local-generator[7539]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:08:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:08:18 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 03:08:19 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:08:19 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:08:26 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 03:08:26 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 14 03:08:26 localhost systemd[1]: man-db-cache-update.service: Consumed 9.679s CPU time.
Oct 14 03:08:26 localhost systemd[1]: run-r82b29e9cdb1a455884cf40889ac02538.service: Deactivated successfully.
Oct 14 03:09:10 localhost systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck3532785434-merged.mount: Deactivated successfully.
Oct 14 03:09:10 localhost podman[18274]: 2025-10-14 07:09:10.177739336 +0000 UTC m=+0.103923784 system refresh
Oct 14 03:09:11 localhost systemd[4190]: Starting D-Bus User Message Bus...
Oct 14 03:09:11 localhost dbus-broker-launch[18331]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 14 03:09:11 localhost dbus-broker-launch[18331]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 14 03:09:11 localhost systemd[4190]: Started D-Bus User Message Bus.
Oct 14 03:09:11 localhost journal[18331]: Ready
Oct 14 03:09:11 localhost systemd[4190]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1
Oct 14 03:09:11 localhost systemd[4190]: Created slice Slice /user.
Oct 14 03:09:11 localhost systemd[4190]: podman-18314.scope: unit configures an IP firewall, but not running as root.
Oct 14 03:09:11 localhost systemd[4190]: (This warning is only shown for the first unit using IP firewalling.)
Oct 14 03:09:11 localhost systemd[4190]: Started podman-18314.scope.
Oct 14 03:09:11 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:09:11 localhost systemd[4190]: Started podman-pause-71c3c09c.scope.
Oct 14 03:09:13 localhost systemd[1]: session-6.scope: Deactivated successfully.
Oct 14 03:09:13 localhost systemd[1]: session-6.scope: Consumed 49.826s CPU time.
Oct 14 03:09:13 localhost systemd-logind[760]: Session 6 logged out. Waiting for processes to exit.
Oct 14 03:09:13 localhost systemd-logind[760]: Removed session 6.
Oct 14 03:09:28 localhost sshd[18335]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:09:28 localhost sshd[18338]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:09:28 localhost sshd[18337]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:09:28 localhost sshd[18334]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:09:28 localhost sshd[18336]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:09:34 localhost sshd[18344]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:09:34 localhost systemd-logind[760]: New session 7 of user zuul.
Oct 14 03:09:34 localhost systemd[1]: Started Session 7 of User zuul.
Oct 14 03:09:34 localhost python3[18361]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIq9But+/Hfc9J5vjzjHcMTQnDUUku1RFL7dcQIHYNLTUIGZ0AQaJy5Ycn5J06z6gzZ6xEr0ccDbinQsuD7Dk3c= zuul@np0005486725.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:09:35 localhost python3[18377]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIq9But+/Hfc9J5vjzjHcMTQnDUUku1RFL7dcQIHYNLTUIGZ0AQaJy5Ycn5J06z6gzZ6xEr0ccDbinQsuD7Dk3c= zuul@np0005486725.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:09:37 localhost systemd[1]: session-7.scope: Deactivated successfully.
Oct 14 03:09:37 localhost systemd-logind[760]: Session 7 logged out. Waiting for processes to exit.
Oct 14 03:09:37 localhost systemd-logind[760]: Removed session 7.
Oct 14 03:10:59 localhost sshd[18379]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:10:59 localhost systemd-logind[760]: New session 8 of user zuul.
Oct 14 03:10:59 localhost systemd[1]: Started Session 8 of User zuul.
Oct 14 03:10:59 localhost python3[18398]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUv/ZB171sShkvmUwM4/A+38mOKHSoVqmUnoFRrcde+TmaD2jOKfnaBsMdk2YTdAdiPwM8PX7LYcOftZjXZ92Uqg/gQ0pshmFBVtIcoN0HEQlFtMQltRrBVPG+qHK5UOF2bUImKqqFx3uTPSmteSVgJtwvFqp/51YTUibYgQBWJPCcOSze95nxendWi6PoXzvorqCyVS44Llj4LmLChBJeqAI5cWs2EeDhQ4Tw8F33iKpBg8WjZAbQVbe2KIQYURMtANtjUJ0Yg5RTArSq57504iqodB4+ynahul8Dp5+TocLZTPu5orcqRGqWDe7CN5pc1eXZQuNNZ0jW59y52GY+ox+WCmp1qvB7TQzhc/r+kAVmT8VNTVUvC5TBGcIw3yxI7lzrd03zpenSL3oyJnFN4SXCeAA8YcXlz7ySaO9YAtbCSdkgj8QJCiykvalRm17F4d4aRX5+rtfEm+WG670vF6FRNNo5OTXTK2Ja84pej1bjzDBvEz81D1EqnHybfJ0= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:11:00 localhost python3[18414]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005486731.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 14 03:11:02 localhost python3[18464]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:11:02 localhost python3[18507]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760425862.2772765-132-130191192193478/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=6c04c38dfe8e446399a5e5f9dbe4740b_id_rsa follow=False checksum=ca0549f1043aa781cfe5001a3649a4105abf4f82 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:11:04 localhost python3[18569]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:11:04 localhost python3[18612]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760425863.864175-221-92484808793252/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=6c04c38dfe8e446399a5e5f9dbe4740b_id_rsa.pub follow=False checksum=8b573aa2906c160b2f7b53c64bd37790afdd4394 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:11:06 localhost python3[18642]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:11:07 localhost python3[18688]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:11:07 localhost python3[18704]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpc1v7rele recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:11:08 localhost python3[18764]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:11:09 localhost python3[18780]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpb9qqnpc_ recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:11:10 localhost python3[18840]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:11:10 localhost python3[18856]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmp8si7qrlk recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:11:11 localhost systemd[1]: session-8.scope: Deactivated successfully.
Oct 14 03:11:11 localhost systemd[1]: session-8.scope: Consumed 3.648s CPU time.
Oct 14 03:11:11 localhost systemd-logind[760]: Session 8 logged out. Waiting for processes to exit.
Oct 14 03:11:11 localhost systemd-logind[760]: Removed session 8.
Oct 14 03:13:13 localhost sshd[18874]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:13:13 localhost systemd-logind[760]: New session 9 of user zuul.
Oct 14 03:13:13 localhost systemd[1]: Started Session 9 of User zuul.
Oct 14 03:13:14 localhost python3[18920]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:18:13 localhost systemd[1]: session-9.scope: Deactivated successfully.
Oct 14 03:18:13 localhost systemd-logind[760]: Session 9 logged out. Waiting for processes to exit.
Oct 14 03:18:13 localhost systemd-logind[760]: Removed session 9.
Oct 14 03:23:43 localhost sshd[18930]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:23:43 localhost systemd-logind[760]: New session 10 of user zuul.
Oct 14 03:23:43 localhost systemd[1]: Started Session 10 of User zuul.
Oct 14 03:23:44 localhost python3[18947]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-f848-5676-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:23:45 localhost python3[18967]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-f848-5676-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:23:50 localhost python3[18986]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Oct 14 03:23:53 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:23:53 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:24:54 localhost python3[19143]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Oct 14 03:24:57 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:24:57 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:25:05 localhost python3[19344]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Oct 14 03:25:08 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:25:08 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:25:14 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:25:14 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:25:37 localhost python3[19679]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Oct 14 03:25:40 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:25:46 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:25:46 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:26:09 localhost python3[20014]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Oct 14 03:26:12 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:26:17 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:26:18 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:26:42 localhost python3[20351]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f848-5676-000000000013-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:26:47 localhost python3[20370]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 03:27:00 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 14 03:27:09 localhost kernel: SELinux: Converting 500 SID table entries...
Oct 14 03:27:09 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 14 03:27:09 localhost kernel: SELinux: policy capability open_perms=1
Oct 14 03:27:09 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 14 03:27:09 localhost kernel: SELinux: policy capability always_check_network=0
Oct 14 03:27:09 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 14 03:27:09 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 14 03:27:09 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 14 03:27:11 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=4 res=1
Oct 14 03:27:11 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 03:27:11 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 14 03:27:11 localhost systemd[1]: Reloading.
Oct 14 03:27:11 localhost systemd-sysv-generator[21193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:27:11 localhost systemd-rc-local-generator[21188]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:27:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:27:11 localhost systemd[1]: Starting dnf makecache...
Oct 14 03:27:11 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 03:27:12 localhost dnf[21316]: Updating Subscription Management repositories.
Oct 14 03:27:12 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 03:27:12 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 14 03:27:12 localhost systemd[1]: run-r4406c8dfdabf4c8e97969e6af8463955.service: Deactivated successfully.
Oct 14 03:27:13 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:27:13 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 14 03:27:13 localhost dnf[21316]: Failed determining last makecache time.
Oct 14 03:27:13 localhost dnf[21316]: Fast Datapath for RHEL 9 x86_64 (RPMs) 46 kB/s | 4.0 kB 00:00
Oct 14 03:27:13 localhost dnf[21316]: Red Hat Enterprise Linux 9 for x86_64 - High Av 43 kB/s | 4.0 kB 00:00
Oct 14 03:27:14 localhost dnf[21316]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 53 kB/s | 4.5 kB 00:00
Oct 14 03:27:14 localhost dnf[21316]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_ 50 kB/s | 4.0 kB 00:00
Oct 14 03:27:14 localhost dnf[21316]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 52 kB/s | 4.5 kB 00:00
Oct 14 03:27:14 localhost dnf[21316]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 49 kB/s | 4.1 kB 00:00
Oct 14 03:27:14 localhost dnf[21316]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 50 kB/s | 4.1 kB 00:00
Oct 14 03:27:14 localhost dnf[21316]: Metadata cache created.
Oct 14 03:27:15 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 14 03:27:15 localhost systemd[1]: Finished dnf makecache.
Oct 14 03:27:15 localhost systemd[1]: dnf-makecache.service: Consumed 2.660s CPU time.
Oct 14 03:27:37 localhost python3[21806]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f848-5676-000000000015-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:28:04 localhost python3[21827]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:28:05 localhost python3[21875]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:28:06 localhost python3[21918]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760426885.370239-291-217580853320079/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=3358dfc6c6ce646155135d0cad900026cb34ba08 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:28:07 localhost python3[21948]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 14 03:28:07 localhost systemd-journald[618]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 91.6 (305 of 333 items), suggesting rotation.
Oct 14 03:28:07 localhost systemd-journald[618]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 14 03:28:07 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 03:28:07 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 03:28:08 localhost python3[21969]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 14 03:28:08 localhost python3[21989]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 14 03:28:08 localhost python3[22009]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 14 03:28:09 localhost python3[22029]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 14 03:28:12 localhost python3[22049]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 03:28:12 localhost systemd[1]: Starting LSB: Bring up/down
networking... Oct 14 03:28:12 localhost network[22052]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 03:28:12 localhost network[22063]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 03:28:12 localhost network[22052]: WARN : [network] 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:12 localhost network[22064]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:12 localhost network[22052]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management. Oct 14 03:28:12 localhost network[22065]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 03:28:12 localhost NetworkManager[5972]: [1760426892.4900] audit: op="connections-reload" pid=22093 uid=0 result="success" Oct 14 03:28:12 localhost network[22052]: Bringing up loopback interface: [ OK ] Oct 14 03:28:12 localhost NetworkManager[5972]: [1760426892.6726] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22181 uid=0 result="success" Oct 14 03:28:12 localhost network[22052]: Bringing up interface eth0: [ OK ] Oct 14 03:28:12 localhost systemd[1]: Started LSB: Bring up/down networking. Oct 14 03:28:13 localhost python3[22222]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 03:28:13 localhost systemd[1]: Starting Open vSwitch Database Unit... Oct 14 03:28:13 localhost chown[22226]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory Oct 14 03:28:13 localhost ovs-ctl[22231]: /etc/openvswitch/conf.db does not exist ... (warning). 
Oct 14 03:28:13 localhost ovs-ctl[22231]: Creating empty database /etc/openvswitch/conf.db [ OK ] Oct 14 03:28:13 localhost ovs-ctl[22231]: Starting ovsdb-server [ OK ] Oct 14 03:28:13 localhost ovs-vsctl[22280]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1 Oct 14 03:28:13 localhost ovs-vsctl[22300]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-110.el9fdp "external-ids:system-id=\"5830d1b9-dd16-4a23-879b-f28430ab4793\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\"" Oct 14 03:28:13 localhost ovs-ctl[22231]: Configuring Open vSwitch system IDs [ OK ] Oct 14 03:28:13 localhost ovs-ctl[22231]: Enabling remote OVSDB managers [ OK ] Oct 14 03:28:13 localhost systemd[1]: Started Open vSwitch Database Unit. Oct 14 03:28:13 localhost ovs-vsctl[22306]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005486731.novalocal Oct 14 03:28:13 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports... Oct 14 03:28:13 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports. Oct 14 03:28:13 localhost systemd[1]: Starting Open vSwitch Forwarding Unit... Oct 14 03:28:13 localhost kernel: openvswitch: Open vSwitch switching datapath Oct 14 03:28:13 localhost ovs-ctl[22351]: Inserting openvswitch module [ OK ] Oct 14 03:28:13 localhost ovs-ctl[22319]: Starting ovs-vswitchd [ OK ] Oct 14 03:28:13 localhost ovs-vsctl[22370]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005486731.novalocal Oct 14 03:28:13 localhost ovs-ctl[22319]: Enabling remote OVSDB managers [ OK ] Oct 14 03:28:13 localhost systemd[1]: Started Open vSwitch Forwarding Unit. Oct 14 03:28:13 localhost systemd[1]: Starting Open vSwitch... Oct 14 03:28:13 localhost systemd[1]: Finished Open vSwitch. 
Oct 14 03:28:17 localhost python3[22388]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f848-5676-00000000001a-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:28:18 localhost NetworkManager[5972]: [1760426898.3814] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22546 uid=0 result="success" Oct 14 03:28:18 localhost ifup[22547]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:18 localhost ifup[22548]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:18 localhost ifup[22549]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Oct 14 03:28:18 localhost NetworkManager[5972]: [1760426898.4126] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22555 uid=0 result="success" Oct 14 03:28:18 localhost ovs-vsctl[22557]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:37:09:c3 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex Oct 14 03:28:18 localhost kernel: device ovs-system entered promiscuous mode Oct 14 03:28:18 localhost NetworkManager[5972]: [1760426898.4414] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4) Oct 14 03:28:18 localhost kernel: Timeout policy base is empty Oct 14 03:28:18 localhost kernel: Failed to associated timeout policy `ovs_test_tp' Oct 14 03:28:18 localhost systemd-udevd[22559]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 03:28:18 localhost kernel: device br-ex entered promiscuous mode Oct 14 03:28:18 localhost systemd-udevd[22571]: Network interface NamePolicy= disabled on kernel command line. Oct 14 03:28:18 localhost NetworkManager[5972]: [1760426898.4865] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5) Oct 14 03:28:18 localhost NetworkManager[5972]: [1760426898.5145] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22584 uid=0 result="success" Oct 14 03:28:18 localhost NetworkManager[5972]: [1760426898.5349] device (br-ex): carrier: link connected Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.5904] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22613 uid=0 result="success" Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.6408] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22628 uid=0 result="success" Oct 14 03:28:21 localhost NET[22653]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.7311] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed') Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.7538] dhcp4 (eth1): canceled DHCP transaction Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.7540] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.7540] dhcp4 (eth1): state changed no lease Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.7585] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22662 uid=0 result="success" Oct 14 03:28:21 localhost ifup[22663]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:21 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... 
Oct 14 03:28:21 localhost ifup[22664]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:21 localhost ifup[22666]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Oct 14 03:28:21 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.7948] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22680 uid=0 result="success" Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.8575] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22690 uid=0 result="success" Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.8665] device (eth1): carrier: link connected Oct 14 03:28:21 localhost NetworkManager[5972]: [1760426901.8897] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22699 uid=0 result="success" Oct 14 03:28:21 localhost ipv6_wait_tentative[22711]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Oct 14 03:28:22 localhost ipv6_wait_tentative[22716]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Oct 14 03:28:23 localhost NetworkManager[5972]: [1760426903.9656] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22726 uid=0 result="success" Oct 14 03:28:24 localhost ovs-vsctl[22741]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1 Oct 14 03:28:24 localhost kernel: device eth1 entered promiscuous mode Oct 14 03:28:24 localhost NetworkManager[5972]: [1760426904.0452] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22748 uid=0 result="success" Oct 14 03:28:24 localhost ifup[22749]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. 
Oct 14 03:28:24 localhost ifup[22750]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:24 localhost ifup[22751]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Oct 14 03:28:24 localhost NetworkManager[5972]: [1760426904.0772] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22757 uid=0 result="success" Oct 14 03:28:24 localhost NetworkManager[5972]: [1760426904.1156] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22767 uid=0 result="success" Oct 14 03:28:24 localhost ifup[22768]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:24 localhost ifup[22769]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:24 localhost ifup[22770]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Oct 14 03:28:24 localhost NetworkManager[5972]: [1760426904.1456] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22776 uid=0 result="success" Oct 14 03:28:24 localhost ovs-vsctl[22779]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Oct 14 03:28:24 localhost kernel: device vlan20 entered promiscuous mode Oct 14 03:28:24 localhost NetworkManager[5972]: [1760426904.1871] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/6) Oct 14 03:28:24 localhost systemd-udevd[22781]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 03:28:24 localhost NetworkManager[5972]: [1760426904.2103] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22790 uid=0 result="success" Oct 14 03:28:24 localhost NetworkManager[5972]: [1760426904.2294] device (vlan20): carrier: link connected Oct 14 03:28:27 localhost NetworkManager[5972]: [1760426907.2806] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22819 uid=0 result="success" Oct 14 03:28:27 localhost NetworkManager[5972]: [1760426907.3331] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22834 uid=0 result="success" Oct 14 03:28:27 localhost NetworkManager[5972]: [1760426907.3966] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22855 uid=0 result="success" Oct 14 03:28:27 localhost ifup[22856]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:27 localhost ifup[22857]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:27 localhost ifup[22858]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Oct 14 03:28:27 localhost NetworkManager[5972]: [1760426907.4301] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22864 uid=0 result="success" Oct 14 03:28:27 localhost ovs-vsctl[22867]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Oct 14 03:28:27 localhost kernel: device vlan23 entered promiscuous mode Oct 14 03:28:27 localhost NetworkManager[5972]: [1760426907.4705] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/7) Oct 14 03:28:27 localhost systemd-udevd[22869]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 03:28:27 localhost NetworkManager[5972]: [1760426907.4948] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22879 uid=0 result="success" Oct 14 03:28:27 localhost NetworkManager[5972]: [1760426907.5149] device (vlan23): carrier: link connected Oct 14 03:28:30 localhost NetworkManager[5972]: [1760426910.5734] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22909 uid=0 result="success" Oct 14 03:28:30 localhost NetworkManager[5972]: [1760426910.6184] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22924 uid=0 result="success" Oct 14 03:28:30 localhost NetworkManager[5972]: [1760426910.6749] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22945 uid=0 result="success" Oct 14 03:28:30 localhost ifup[22946]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:30 localhost ifup[22947]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:30 localhost ifup[22948]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Oct 14 03:28:30 localhost NetworkManager[5972]: [1760426910.7084] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22954 uid=0 result="success" Oct 14 03:28:30 localhost ovs-vsctl[22957]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Oct 14 03:28:30 localhost kernel: device vlan21 entered promiscuous mode Oct 14 03:28:30 localhost NetworkManager[5972]: [1760426910.7728] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/8) Oct 14 03:28:30 localhost systemd-udevd[22959]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 03:28:30 localhost NetworkManager[5972]: [1760426910.7987] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22969 uid=0 result="success" Oct 14 03:28:30 localhost NetworkManager[5972]: [1760426910.8202] device (vlan21): carrier: link connected Oct 14 03:28:31 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Oct 14 03:28:33 localhost NetworkManager[5972]: [1760426913.8669] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22999 uid=0 result="success" Oct 14 03:28:33 localhost NetworkManager[5972]: [1760426913.9134] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23014 uid=0 result="success" Oct 14 03:28:33 localhost NetworkManager[5972]: [1760426913.9701] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23035 uid=0 result="success" Oct 14 03:28:33 localhost ifup[23036]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:33 localhost ifup[23037]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:33 localhost ifup[23038]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Oct 14 03:28:34 localhost NetworkManager[5972]: [1760426914.0005] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23044 uid=0 result="success" Oct 14 03:28:34 localhost ovs-vsctl[23047]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Oct 14 03:28:34 localhost kernel: device vlan44 entered promiscuous mode Oct 14 03:28:34 localhost NetworkManager[5972]: [1760426914.0395] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/9) Oct 14 03:28:34 localhost systemd-udevd[23049]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 03:28:34 localhost NetworkManager[5972]: [1760426914.0664] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23059 uid=0 result="success" Oct 14 03:28:34 localhost NetworkManager[5972]: [1760426914.0907] device (vlan44): carrier: link connected Oct 14 03:28:37 localhost NetworkManager[5972]: [1760426917.1396] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23089 uid=0 result="success" Oct 14 03:28:37 localhost NetworkManager[5972]: [1760426917.1863] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23104 uid=0 result="success" Oct 14 03:28:37 localhost NetworkManager[5972]: [1760426917.2396] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23125 uid=0 result="success" Oct 14 03:28:37 localhost ifup[23126]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:37 localhost ifup[23127]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:37 localhost ifup[23128]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Oct 14 03:28:37 localhost NetworkManager[5972]: [1760426917.2681] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23134 uid=0 result="success" Oct 14 03:28:37 localhost ovs-vsctl[23137]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Oct 14 03:28:37 localhost systemd-udevd[23139]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 03:28:37 localhost kernel: device vlan22 entered promiscuous mode Oct 14 03:28:37 localhost NetworkManager[5972]: [1760426917.3051] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/10) Oct 14 03:28:37 localhost NetworkManager[5972]: [1760426917.3285] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23149 uid=0 result="success" Oct 14 03:28:37 localhost NetworkManager[5972]: [1760426917.3498] device (vlan22): carrier: link connected Oct 14 03:28:40 localhost NetworkManager[5972]: [1760426920.4031] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23179 uid=0 result="success" Oct 14 03:28:40 localhost NetworkManager[5972]: [1760426920.4563] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23194 uid=0 result="success" Oct 14 03:28:40 localhost NetworkManager[5972]: [1760426920.5193] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23215 uid=0 result="success" Oct 14 03:28:40 localhost ifup[23216]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:40 localhost ifup[23217]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:40 localhost ifup[23218]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Oct 14 03:28:40 localhost NetworkManager[5972]: [1760426920.5518] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23224 uid=0 result="success" Oct 14 03:28:40 localhost ovs-vsctl[23227]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Oct 14 03:28:40 localhost NetworkManager[5972]: [1760426920.6554] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23234 uid=0 result="success" Oct 14 03:28:41 localhost NetworkManager[5972]: [1760426921.7042] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23261 uid=0 result="success" Oct 14 03:28:41 localhost NetworkManager[5972]: [1760426921.7493] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23276 uid=0 result="success" Oct 14 03:28:41 localhost NetworkManager[5972]: [1760426921.8061] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23297 uid=0 result="success" Oct 14 03:28:41 localhost ifup[23298]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:41 localhost ifup[23299]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:41 localhost ifup[23300]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Oct 14 03:28:41 localhost NetworkManager[5972]: [1760426921.8356] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23306 uid=0 result="success" Oct 14 03:28:41 localhost ovs-vsctl[23309]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Oct 14 03:28:41 localhost NetworkManager[5972]: [1760426921.9208] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23316 uid=0 result="success" Oct 14 03:28:42 localhost NetworkManager[5972]: [1760426922.9833] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23344 uid=0 result="success" Oct 14 03:28:43 localhost NetworkManager[5972]: [1760426923.0323] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23359 uid=0 result="success" Oct 14 03:28:43 localhost NetworkManager[5972]: [1760426923.0931] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23380 uid=0 result="success" Oct 14 03:28:43 localhost ifup[23381]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:43 localhost ifup[23382]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:43 localhost ifup[23383]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Oct 14 03:28:43 localhost NetworkManager[5972]: [1760426923.1246] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23389 uid=0 result="success" Oct 14 03:28:43 localhost ovs-vsctl[23392]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Oct 14 03:28:43 localhost NetworkManager[5972]: [1760426923.2308] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23399 uid=0 result="success" Oct 14 03:28:44 localhost NetworkManager[5972]: [1760426924.2896] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23427 uid=0 result="success" Oct 14 03:28:44 localhost NetworkManager[5972]: [1760426924.3337] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23442 uid=0 result="success" Oct 14 03:28:44 localhost NetworkManager[5972]: [1760426924.3817] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23463 uid=0 result="success" Oct 14 03:28:44 localhost ifup[23464]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:44 localhost ifup[23465]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:44 localhost ifup[23466]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Oct 14 03:28:44 localhost NetworkManager[5972]: [1760426924.4043] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23472 uid=0 result="success" Oct 14 03:28:44 localhost ovs-vsctl[23475]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Oct 14 03:28:44 localhost NetworkManager[5972]: [1760426924.4844] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23482 uid=0 result="success" Oct 14 03:28:45 localhost NetworkManager[5972]: [1760426925.5412] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23510 uid=0 result="success" Oct 14 03:28:45 localhost NetworkManager[5972]: [1760426925.5885] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23525 uid=0 result="success" Oct 14 03:28:45 localhost NetworkManager[5972]: [1760426925.6508] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23546 uid=0 result="success" Oct 14 03:28:45 localhost ifup[23547]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Oct 14 03:28:45 localhost ifup[23548]: 'network-scripts' will be removed from distribution in near future. Oct 14 03:28:45 localhost ifup[23549]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Oct 14 03:28:45 localhost NetworkManager[5972]: [1760426925.6838] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23555 uid=0 result="success"
Oct 14 03:28:45 localhost ovs-vsctl[23558]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Oct 14 03:28:45 localhost NetworkManager[5972]: [1760426925.7371] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23565 uid=0 result="success"
Oct 14 03:28:46 localhost NetworkManager[5972]: [1760426926.7965] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23593 uid=0 result="success"
Oct 14 03:28:46 localhost NetworkManager[5972]: [1760426926.8460] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23608 uid=0 result="success"
Oct 14 03:29:39 localhost python3[23640]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f848-5676-00000000001b-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:29:45 localhost python3[23659]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUv/ZB171sShkvmUwM4/A+38mOKHSoVqmUnoFRrcde+TmaD2jOKfnaBsMdk2YTdAdiPwM8PX7LYcOftZjXZ92Uqg/gQ0pshmFBVtIcoN0HEQlFtMQltRrBVPG+qHK5UOF2bUImKqqFx3uTPSmteSVgJtwvFqp/51YTUibYgQBWJPCcOSze95nxendWi6PoXzvorqCyVS44Llj4LmLChBJeqAI5cWs2EeDhQ4Tw8F33iKpBg8WjZAbQVbe2KIQYURMtANtjUJ0Yg5RTArSq57504iqodB4+ynahul8Dp5+TocLZTPu5orcqRGqWDe7CN5pc1eXZQuNNZ0jW59y52GY+ox+WCmp1qvB7TQzhc/r+kAVmT8VNTVUvC5TBGcIw3yxI7lzrd03zpenSL3oyJnFN4SXCeAA8YcXlz7ySaO9YAtbCSdkgj8QJCiykvalRm17F4d4aRX5+rtfEm+WG670vF6FRNNo5OTXTK2Ja84pej1bjzDBvEz81D1EqnHybfJ0= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:29:46 localhost python3[23675]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUv/ZB171sShkvmUwM4/A+38mOKHSoVqmUnoFRrcde+TmaD2jOKfnaBsMdk2YTdAdiPwM8PX7LYcOftZjXZ92Uqg/gQ0pshmFBVtIcoN0HEQlFtMQltRrBVPG+qHK5UOF2bUImKqqFx3uTPSmteSVgJtwvFqp/51YTUibYgQBWJPCcOSze95nxendWi6PoXzvorqCyVS44Llj4LmLChBJeqAI5cWs2EeDhQ4Tw8F33iKpBg8WjZAbQVbe2KIQYURMtANtjUJ0Yg5RTArSq57504iqodB4+ynahul8Dp5+TocLZTPu5orcqRGqWDe7CN5pc1eXZQuNNZ0jW59y52GY+ox+WCmp1qvB7TQzhc/r+kAVmT8VNTVUvC5TBGcIw3yxI7lzrd03zpenSL3oyJnFN4SXCeAA8YcXlz7ySaO9YAtbCSdkgj8QJCiykvalRm17F4d4aRX5+rtfEm+WG670vF6FRNNo5OTXTK2Ja84pej1bjzDBvEz81D1EqnHybfJ0= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:29:47 localhost python3[23689]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUv/ZB171sShkvmUwM4/A+38mOKHSoVqmUnoFRrcde+TmaD2jOKfnaBsMdk2YTdAdiPwM8PX7LYcOftZjXZ92Uqg/gQ0pshmFBVtIcoN0HEQlFtMQltRrBVPG+qHK5UOF2bUImKqqFx3uTPSmteSVgJtwvFqp/51YTUibYgQBWJPCcOSze95nxendWi6PoXzvorqCyVS44Llj4LmLChBJeqAI5cWs2EeDhQ4Tw8F33iKpBg8WjZAbQVbe2KIQYURMtANtjUJ0Yg5RTArSq57504iqodB4+ynahul8Dp5+TocLZTPu5orcqRGqWDe7CN5pc1eXZQuNNZ0jW59y52GY+ox+WCmp1qvB7TQzhc/r+kAVmT8VNTVUvC5TBGcIw3yxI7lzrd03zpenSL3oyJnFN4SXCeAA8YcXlz7ySaO9YAtbCSdkgj8QJCiykvalRm17F4d4aRX5+rtfEm+WG670vF6FRNNo5OTXTK2Ja84pej1bjzDBvEz81D1EqnHybfJ0= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:29:48 localhost python3[23705]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUv/ZB171sShkvmUwM4/A+38mOKHSoVqmUnoFRrcde+TmaD2jOKfnaBsMdk2YTdAdiPwM8PX7LYcOftZjXZ92Uqg/gQ0pshmFBVtIcoN0HEQlFtMQltRrBVPG+qHK5UOF2bUImKqqFx3uTPSmteSVgJtwvFqp/51YTUibYgQBWJPCcOSze95nxendWi6PoXzvorqCyVS44Llj4LmLChBJeqAI5cWs2EeDhQ4Tw8F33iKpBg8WjZAbQVbe2KIQYURMtANtjUJ0Yg5RTArSq57504iqodB4+ynahul8Dp5+TocLZTPu5orcqRGqWDe7CN5pc1eXZQuNNZ0jW59y52GY+ox+WCmp1qvB7TQzhc/r+kAVmT8VNTVUvC5TBGcIw3yxI7lzrd03zpenSL3oyJnFN4SXCeAA8YcXlz7ySaO9YAtbCSdkgj8QJCiykvalRm17F4d4aRX5+rtfEm+WG670vF6FRNNo5OTXTK2Ja84pej1bjzDBvEz81D1EqnHybfJ0= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:29:48 localhost sshd[23706]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:29:49 localhost python3[23720]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Oct 14 03:29:50 localhost python3[23735]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005486731.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f848-5676-000000000022-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:29:50 localhost python3[23755]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f848-5676-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:29:50 localhost systemd[1]: Starting Hostname Service...
Oct 14 03:29:51 localhost systemd[1]: Started Hostname Service.
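The hostname task above (pid 23735, with `#012` as journald's newline escape) derives the short host name from the FQDN using bash parameter expansion: replace every `.` with a space, word-split the result into an array, and take the first element. A standalone sketch of that one-liner, using the example hostname from the log:

```shell
#!/usr/bin/env bash
# Mirror of the Ansible shell task logged above: extract the short host
# name from an FQDN without calling out to cut/awk.
hostname="np0005486731.novalocal"
hostname_str_array=(${hostname//./ })   # unquoted -> word-split into (np0005486731 novalocal)
echo "${hostname_str_array[0]}"         # prints: np0005486731
```

The follow-up task (pid 23755) then reads this value back and sets `"$hostname.localdomain"` via `hostnamectl`, which is what produces the `np0005486731.novalocal` → `np0005486731.localdomain` rename seen a few lines later.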
Oct 14 03:29:51 localhost systemd-hostnamed[23759]: Hostname set to (static)
Oct 14 03:29:51 localhost NetworkManager[5972]: [1760426991.0052] hostname: static hostname changed from "np0005486731.novalocal" to "np0005486731.localdomain"
Oct 14 03:29:51 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 14 03:29:51 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 14 03:29:52 localhost systemd[1]: session-10.scope: Deactivated successfully.
Oct 14 03:29:52 localhost systemd[1]: session-10.scope: Consumed 1min 43.711s CPU time.
Oct 14 03:29:52 localhost systemd-logind[760]: Session 10 logged out. Waiting for processes to exit.
Oct 14 03:29:52 localhost systemd-logind[760]: Removed session 10.
Oct 14 03:29:54 localhost sshd[23770]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:29:54 localhost systemd-logind[760]: New session 11 of user zuul.
Oct 14 03:29:54 localhost systemd[1]: Started Session 11 of User zuul.
Oct 14 03:29:55 localhost python3[23787]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Oct 14 03:29:57 localhost systemd[1]: session-11.scope: Deactivated successfully.
Oct 14 03:29:57 localhost systemd-logind[760]: Session 11 logged out. Waiting for processes to exit.
Oct 14 03:29:57 localhost systemd-logind[760]: Removed session 11.
Oct 14 03:30:01 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 14 03:30:21 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 14 03:30:42 localhost sshd[23791]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:30:42 localhost systemd-logind[760]: New session 12 of user zuul.
Oct 14 03:30:42 localhost systemd[1]: Started Session 12 of User zuul.
Oct 14 03:30:43 localhost python3[23810]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 03:30:47 localhost systemd[1]: Reloading.
Oct 14 03:30:47 localhost systemd-rc-local-generator[23850]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:30:47 localhost systemd-sysv-generator[23858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:30:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:30:47 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 14 03:30:47 localhost systemd[1]: Reloading.
Oct 14 03:30:47 localhost systemd-rc-local-generator[23892]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:30:47 localhost systemd-sysv-generator[23897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:30:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:30:48 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 14 03:30:48 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 14 03:30:48 localhost systemd[1]: Reloading.
Oct 14 03:30:48 localhost systemd-rc-local-generator[23932]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:30:48 localhost systemd-sysv-generator[23938]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:30:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:30:48 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Oct 14 03:30:48 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 03:30:48 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 14 03:30:48 localhost systemd[1]: Reloading.
Oct 14 03:30:48 localhost systemd-rc-local-generator[24003]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:30:48 localhost systemd-sysv-generator[24008]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:30:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:30:48 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 03:30:48 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 03:30:49 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 03:30:49 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 14 03:30:49 localhost systemd[1]: run-rae74265ce4514b44b8c3eed62eb99b0b.service: Deactivated successfully.
Oct 14 03:30:49 localhost systemd[1]: run-r30ff8e7d40dd43a6a084bd884d628143.service: Deactivated successfully.
Oct 14 03:31:49 localhost systemd[1]: session-12.scope: Deactivated successfully.
Oct 14 03:31:49 localhost systemd[1]: session-12.scope: Consumed 5.202s CPU time.
Oct 14 03:31:49 localhost systemd-logind[760]: Session 12 logged out. Waiting for processes to exit.
Oct 14 03:31:49 localhost systemd-logind[760]: Removed session 12.
Oct 14 03:32:53 localhost sshd[24584]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:47:58 localhost sshd[24591]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:47:58 localhost systemd-logind[760]: New session 13 of user zuul.
Oct 14 03:47:58 localhost systemd[1]: Started Session 13 of User zuul.
Oct 14 03:47:58 localhost python3[24639]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 03:48:00 localhost python3[24726]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 03:48:03 localhost python3[24743]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 03:48:04 localhost python3[24760]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:48:04 localhost kernel: loop: module loaded
Oct 14 03:48:04 localhost kernel: loop3: detected capacity change from 0 to 14680064
Oct 14 03:48:04 localhost python3[24785]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:48:04 localhost lvm[24788]: PV /dev/loop3 not used.
Oct 14 03:48:04 localhost lvm[24790]: PV /dev/loop3 online, VG ceph_vg0 is complete.
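Decoded from their `#012` (newline) escapes, the two shell tasks above build a file-backed OSD device: a 7 GiB sparse file, a loop device over it, and an LVM stack on top. A runnable sketch of the unprivileged part follows; the loop/LVM steps need root and real block devices, so they are kept as comments copied from the log (the demo path is an assumption, the log uses /var/lib/ceph-osd-0.img):

```shell
#!/usr/bin/env bash
set -e
# count=0 with seek=7G writes no data blocks; it only truncates the file
# to an apparent size of 7 GiB, which is why creation is instant.
img=/tmp/ceph-osd-demo.img
dd if=/dev/zero of="$img" bs=1 count=0 seek=7G 2>/dev/null
stat -c %s "$img"   # prints: 7516192768  (7 * 1024^3 bytes)
# Root-only continuation, verbatim from the logged tasks:
# losetup /dev/loop3 "$img"
# pvcreate /dev/loop3
# vgcreate ceph_vg0 /dev/loop3
# lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
rm -f "$img"
```

This size also explains the kernel line "loop3: detected capacity change from 0 to 14680064": that figure is in 512-byte sectors, and 14680064 × 512 = 7516192768 bytes, i.e. exactly 7 GiB.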
Oct 14 03:48:05 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 14 03:48:05 localhost lvm[24798]: 1 logical volume(s) in volume group "ceph_vg0" now active
Oct 14 03:48:05 localhost lvm[24800]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 03:48:05 localhost lvm[24800]: VG ceph_vg0 finished
Oct 14 03:48:05 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 14 03:48:05 localhost python3[24849]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:48:06 localhost python3[24892]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760428085.3228009-55034-116599793637386/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:06 localhost python3[24922]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 03:48:08 localhost systemd[1]: Reloading.
Oct 14 03:48:08 localhost systemd-sysv-generator[24957]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:48:08 localhost systemd-rc-local-generator[24953]: /etc/rc.d/rc.local is not marked executable, skipping.
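The rendered template deployed above is not logged (content=NOT_LOGGING_PARAMETER), but its purpose is visible: a unit that re-attaches the loop device at boot, whose ExecStart output appears later as the `bash[24965]: /dev/loop3: [64516]:8400144 (...)` line. A plausible reconstruction of ceph-osd-losetup-0.service, in which every directive is an assumption rather than the actual template content:

```ini
[Unit]
Description=Ceph OSD losetup
After=local-fs.target

[Service]
Type=oneshot
# Keep the unit "active" after the one-shot attach so it is not re-run
RemainAfterExit=yes
# If /dev/loop3 is already attached, the first losetup lists it (the
# output seen in the log); otherwise the second attaches the backing file.
ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

[Install]
WantedBy=multi-user.target
```

A oneshot/RemainAfterExit pair is the usual idiom for this kind of boot-time setup task, and it matches the "Starting Ceph OSD losetup... / Finished Ceph OSD losetup." sequence logged at 03:48:08.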
Oct 14 03:48:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:48:08 localhost systemd[1]: Starting Ceph OSD losetup...
Oct 14 03:48:08 localhost bash[24965]: /dev/loop3: [64516]:8400144 (/var/lib/ceph-osd-0.img)
Oct 14 03:48:08 localhost systemd[1]: Finished Ceph OSD losetup.
Oct 14 03:48:08 localhost lvm[24967]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 03:48:08 localhost lvm[24967]: VG ceph_vg0 finished
Oct 14 03:48:08 localhost python3[24983]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 03:48:11 localhost python3[25000]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 03:48:12 localhost python3[25016]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:48:12 localhost kernel: loop4: detected capacity change from 0 to 14680064
Oct 14 03:48:12 localhost python3[25038]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:48:13 localhost lvm[25041]: PV /dev/loop4 not used.
Oct 14 03:48:13 localhost lvm[25051]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 03:48:13 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 14 03:48:13 localhost lvm[25053]: 1 logical volume(s) in volume group "ceph_vg1" now active
Oct 14 03:48:13 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 14 03:48:13 localhost python3[25101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:48:14 localhost python3[25144]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760428093.4385538-55204-186527947451728/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:14 localhost python3[25174]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 03:48:14 localhost systemd[1]: Reloading.
Oct 14 03:48:14 localhost systemd-rc-local-generator[25197]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:48:14 localhost systemd-sysv-generator[25202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:48:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:48:15 localhost systemd[1]: Starting Ceph OSD losetup...
Oct 14 03:48:15 localhost bash[25214]: /dev/loop4: [64516]:8606979 (/var/lib/ceph-osd-1.img)
Oct 14 03:48:15 localhost systemd[1]: Finished Ceph OSD losetup.
Oct 14 03:48:15 localhost lvm[25215]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 03:48:15 localhost lvm[25215]: VG ceph_vg1 finished
Oct 14 03:48:23 localhost python3[25260]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 03:48:24 localhost python3[25280]: ansible-hostname Invoked with name=np0005486731.localdomain use=None
Oct 14 03:48:24 localhost systemd[1]: Starting Hostname Service...
Oct 14 03:48:24 localhost systemd[1]: Started Hostname Service.
Oct 14 03:48:27 localhost python3[25303]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Oct 14 03:48:27 localhost python3[25351]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.0_vnfu9qtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:28 localhost python3[25381]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.0_vnfu9qtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:28 localhost python3[25397]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.0_vnfu9qtmphosts insertbefore=BOF block=192.168.122.106 np0005486731.localdomain np0005486731#012192.168.122.106 np0005486731.ctlplane.localdomain np0005486731.ctlplane#012192.168.122.107 np0005486732.localdomain np0005486732#012192.168.122.107 np0005486732.ctlplane.localdomain np0005486732.ctlplane#012192.168.122.108 np0005486733.localdomain np0005486733#012192.168.122.108 np0005486733.ctlplane.localdomain np0005486733.ctlplane#012192.168.122.103 np0005486728.localdomain np0005486728#012192.168.122.103 np0005486728.ctlplane.localdomain np0005486728.ctlplane#012192.168.122.104 np0005486729.localdomain np0005486729#012192.168.122.104 np0005486729.ctlplane.localdomain np0005486729.ctlplane#012192.168.122.105 np0005486730.localdomain np0005486730#012192.168.122.105 np0005486730.ctlplane.localdomain np0005486730.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:29 localhost python3[25413]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.0_vnfu9qtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:48:29 localhost python3[25430]: ansible-file Invoked with path=/tmp/ansible.0_vnfu9qtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:31 localhost python3[25446]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:48:32 localhost python3[25464]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 03:48:36 localhost python3[25513]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:48:37 localhost python3[25558]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760428116.4238293-56059-3810091537074/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:38 localhost python3[25588]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 03:48:39 localhost python3[25606]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 03:48:40 localhost chronyd[766]: chronyd exiting
Oct 14 03:48:40 localhost systemd[1]: Stopping NTP client/server...
Oct 14 03:48:40 localhost systemd[1]: chronyd.service: Deactivated successfully.
Oct 14 03:48:40 localhost systemd[1]: Stopped NTP client/server.
Oct 14 03:48:40 localhost systemd[1]: chronyd.service: Consumed 107ms CPU time, read 1.9M from disk, written 4.0K to disk.
Oct 14 03:48:40 localhost systemd[1]: Starting NTP client/server...
Oct 14 03:48:40 localhost chronyd[25614]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 14 03:48:40 localhost chronyd[25614]: Frequency -25.941 +/- 0.040 ppm read from /var/lib/chrony/drift
Oct 14 03:48:40 localhost chronyd[25614]: Loaded seccomp filter (level 2)
Oct 14 03:48:40 localhost systemd[1]: Started NTP client/server.
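Decoded from its `#012` escapes, the ansible-blockinfile task earlier in this burst (pid 25397, with `marker=# {mark}` and `insertbefore=BOF`) writes the following marker-delimited block at the top of /etc/hosts; this rendering is a straight transcription of the logged `block=` parameter:

```
# START_HOST_ENTRIES_FOR_STACK: overcloud
192.168.122.106 np0005486731.localdomain np0005486731
192.168.122.106 np0005486731.ctlplane.localdomain np0005486731.ctlplane
192.168.122.107 np0005486732.localdomain np0005486732
192.168.122.107 np0005486732.ctlplane.localdomain np0005486732.ctlplane
192.168.122.108 np0005486733.localdomain np0005486733
192.168.122.108 np0005486733.ctlplane.localdomain np0005486733.ctlplane
192.168.122.103 np0005486728.localdomain np0005486728
192.168.122.103 np0005486728.ctlplane.localdomain np0005486728.ctlplane
192.168.122.104 np0005486729.localdomain np0005486729
192.168.122.104 np0005486729.ctlplane.localdomain np0005486729.ctlplane
192.168.122.105 np0005486730.localdomain np0005486730
192.168.122.105 np0005486730.ctlplane.localdomain np0005486730.ctlplane

192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
# END_HOST_ENTRIES_FOR_STACK: overcloud
```

The edit is made on a temp copy (/tmp/ansible.0_vnfu9qtmphosts), then copied back over /etc/hosts and deleted, so a failed render never leaves /etc/hosts half-written.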
Oct 14 03:48:41 localhost python3[25663]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:48:41 localhost python3[25706]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760428120.7700906-56291-56678226975058/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:42 localhost python3[25736]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 03:48:42 localhost systemd[1]: Reloading.
Oct 14 03:48:42 localhost systemd-rc-local-generator[25757]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:48:42 localhost systemd-sysv-generator[25764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:48:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:48:42 localhost systemd[1]: Reloading.
Oct 14 03:48:42 localhost systemd-rc-local-generator[25800]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:48:42 localhost systemd-sysv-generator[25804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:48:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:48:42 localhost systemd[1]: Starting chronyd online sources service...
Oct 14 03:48:42 localhost chronyc[25813]: 200 OK
Oct 14 03:48:42 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Oct 14 03:48:42 localhost systemd[1]: Finished chronyd online sources service.
Oct 14 03:48:43 localhost python3[25829]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:48:43 localhost chronyd[25614]: System clock was stepped by 0.000000 seconds
Oct 14 03:48:43 localhost python3[25846]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:48:44 localhost chronyd[25614]: Selected source 216.128.178.20 (pool.ntp.org)
Oct 14 03:48:54 localhost python3[25863]: ansible-timezone Invoked with name=UTC hwclock=None
Oct 14 03:48:54 localhost systemd[1]: Starting Time & Date Service...
Oct 14 03:48:54 localhost systemd[1]: Started Time & Date Service.
Oct 14 03:48:54 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 14 03:48:56 localhost python3[25886]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 03:48:56 localhost chronyd[25614]: chronyd exiting
Oct 14 03:48:56 localhost systemd[1]: Stopping NTP client/server...
Oct 14 03:48:56 localhost systemd[1]: chronyd.service: Deactivated successfully.
Oct 14 03:48:56 localhost systemd[1]: Stopped NTP client/server.
Oct 14 03:48:56 localhost systemd[1]: Starting NTP client/server...
Oct 14 03:48:56 localhost chronyd[25893]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 14 03:48:56 localhost chronyd[25893]: Frequency -25.941 +/- 0.041 ppm read from /var/lib/chrony/drift
Oct 14 03:48:56 localhost chronyd[25893]: Loaded seccomp filter (level 2)
Oct 14 03:48:56 localhost systemd[1]: Started NTP client/server.
Oct 14 03:49:00 localhost chronyd[25893]: Selected source 216.128.178.20 (pool.ntp.org)
Oct 14 03:49:24 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 14 03:50:53 localhost sshd[26090]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:53 localhost systemd-logind[760]: New session 14 of user ceph-admin.
Oct 14 03:50:53 localhost systemd[1]: Created slice User Slice of UID 1002.
Oct 14 03:50:53 localhost systemd[1]: Starting User Runtime Directory /run/user/1002...
Oct 14 03:50:53 localhost systemd[1]: Finished User Runtime Directory /run/user/1002.
Oct 14 03:50:53 localhost systemd[1]: Starting User Manager for UID 1002...
Oct 14 03:50:53 localhost sshd[26107]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:53 localhost systemd[26094]: Queued start job for default target Main User Target.
Oct 14 03:50:53 localhost systemd[26094]: Created slice User Application Slice.
Oct 14 03:50:53 localhost systemd[26094]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 14 03:50:53 localhost systemd[26094]: Started Daily Cleanup of User's Temporary Directories.
Oct 14 03:50:53 localhost systemd[26094]: Reached target Paths.
Oct 14 03:50:53 localhost systemd[26094]: Reached target Timers.
Oct 14 03:50:53 localhost systemd[26094]: Starting D-Bus User Message Bus Socket...
Oct 14 03:50:53 localhost systemd[26094]: Starting Create User's Volatile Files and Directories...
Oct 14 03:50:53 localhost systemd[26094]: Listening on D-Bus User Message Bus Socket.
Oct 14 03:50:53 localhost systemd[26094]: Reached target Sockets.
Oct 14 03:50:53 localhost systemd[26094]: Finished Create User's Volatile Files and Directories.
Oct 14 03:50:53 localhost systemd[26094]: Reached target Basic System.
Oct 14 03:50:53 localhost systemd[26094]: Reached target Main User Target.
Oct 14 03:50:53 localhost systemd[26094]: Startup finished in 120ms.
Oct 14 03:50:53 localhost systemd[1]: Started User Manager for UID 1002.
Oct 14 03:50:53 localhost systemd[1]: Started Session 14 of User ceph-admin.
Oct 14 03:50:53 localhost systemd-logind[760]: New session 16 of user ceph-admin.
Oct 14 03:50:53 localhost systemd[1]: Started Session 16 of User ceph-admin.
Oct 14 03:50:53 localhost sshd[26129]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:54 localhost systemd-logind[760]: New session 17 of user ceph-admin.
Oct 14 03:50:54 localhost systemd[1]: Started Session 17 of User ceph-admin.
Oct 14 03:50:54 localhost sshd[26148]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:54 localhost systemd-logind[760]: New session 18 of user ceph-admin.
Oct 14 03:50:54 localhost systemd[1]: Started Session 18 of User ceph-admin.
Oct 14 03:50:54 localhost sshd[26167]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:54 localhost systemd-logind[760]: New session 19 of user ceph-admin.
Oct 14 03:50:54 localhost systemd[1]: Started Session 19 of User ceph-admin.
Oct 14 03:50:55 localhost sshd[26186]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:55 localhost systemd-logind[760]: New session 20 of user ceph-admin.
Oct 14 03:50:55 localhost systemd[1]: Started Session 20 of User ceph-admin.
Oct 14 03:50:55 localhost sshd[26205]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:55 localhost systemd-logind[760]: New session 21 of user ceph-admin.
Oct 14 03:50:55 localhost systemd[1]: Started Session 21 of User ceph-admin.
Oct 14 03:50:55 localhost sshd[26224]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:56 localhost systemd-logind[760]: New session 22 of user ceph-admin.
Oct 14 03:50:56 localhost systemd[1]: Started Session 22 of User ceph-admin.
Oct 14 03:50:56 localhost sshd[26243]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:56 localhost systemd-logind[760]: New session 23 of user ceph-admin.
Oct 14 03:50:56 localhost systemd[1]: Started Session 23 of User ceph-admin.
Oct 14 03:50:56 localhost sshd[26262]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:56 localhost systemd-logind[760]: New session 24 of user ceph-admin.
Oct 14 03:50:56 localhost systemd[1]: Started Session 24 of User ceph-admin.
Oct 14 03:50:57 localhost sshd[26279]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:57 localhost systemd-logind[760]: New session 25 of user ceph-admin.
Oct 14 03:50:57 localhost systemd[1]: Started Session 25 of User ceph-admin.
Oct 14 03:50:57 localhost sshd[26298]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:50:57 localhost systemd-logind[760]: New session 26 of user ceph-admin.
Oct 14 03:50:57 localhost systemd[1]: Started Session 26 of User ceph-admin.
Oct 14 03:50:58 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:51:22 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:51:22 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:51:23 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:51:23 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:51:23 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26513 (sysctl)
Oct 14 03:51:23 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 14 03:51:23 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 14 03:51:24 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:51:24 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:51:28 localhost kernel: VFS: idmapped mount is not enabled.
Oct 14 03:51:46 localhost podman[26652]:
Oct 14 03:51:46 localhost podman[26652]: 2025-10-14 07:51:46.281533629 +0000 UTC m=+21.360114633 container create e98f2de4ec85f79dae54e60b17a7b7d7e1c06f305efb7b8f64d1e25297b22c88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_engelbart, ceph=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, release=553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55)
Oct 14 03:51:46 localhost systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck60357613-merged.mount: Deactivated successfully.
Oct 14 03:51:46 localhost podman[26652]: 2025-10-14 07:51:24.952310763 +0000 UTC m=+0.030891797 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 03:51:46 localhost systemd[1]: Created slice Slice /machine.
Oct 14 03:51:46 localhost systemd[1]: Started libpod-conmon-e98f2de4ec85f79dae54e60b17a7b7d7e1c06f305efb7b8f64d1e25297b22c88.scope.
Oct 14 03:51:46 localhost systemd[1]: Started libcrun container.
Oct 14 03:51:46 localhost podman[26652]: 2025-10-14 07:51:46.395905505 +0000 UTC m=+21.474486529 container init e98f2de4ec85f79dae54e60b17a7b7d7e1c06f305efb7b8f64d1e25297b22c88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_engelbart, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, architecture=x86_64, release=553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 14 03:51:46 localhost podman[26652]: 2025-10-14 07:51:46.403050497 +0000 UTC m=+21.481631531 container start e98f2de4ec85f79dae54e60b17a7b7d7e1c06f305efb7b8f64d1e25297b22c88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_engelbart, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, name=rhceph, release=553, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7)
Oct 14 03:51:46 localhost podman[26652]: 2025-10-14 07:51:46.403335692 +0000 UTC m=+21.481916726 container attach e98f2de4ec85f79dae54e60b17a7b7d7e1c06f305efb7b8f64d1e25297b22c88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_engelbart, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 14 03:51:46 localhost gifted_engelbart[26792]: 167 167
Oct 14 03:51:46 localhost systemd[1]: libpod-e98f2de4ec85f79dae54e60b17a7b7d7e1c06f305efb7b8f64d1e25297b22c88.scope: Deactivated successfully.
Oct 14 03:51:46 localhost podman[26652]: 2025-10-14 07:51:46.405983611 +0000 UTC m=+21.484564675 container died e98f2de4ec85f79dae54e60b17a7b7d7e1c06f305efb7b8f64d1e25297b22c88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_engelbart, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public)
Oct 14 03:51:46 localhost podman[26797]: 2025-10-14 07:51:46.505134738 +0000 UTC m=+0.090678303 container remove e98f2de4ec85f79dae54e60b17a7b7d7e1c06f305efb7b8f64d1e25297b22c88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_engelbart, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553)
Oct 14 03:51:46 localhost systemd[1]: libpod-conmon-e98f2de4ec85f79dae54e60b17a7b7d7e1c06f305efb7b8f64d1e25297b22c88.scope: Deactivated successfully.
Oct 14 03:51:46 localhost podman[26816]:
Oct 14 03:51:46 localhost podman[26816]: 2025-10-14 07:51:46.71573892 +0000 UTC m=+0.070322801 container create d12f8c270689f76d98f738efd39a609df37a56263360cc801b54e43d563ce327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_saha, distribution-scope=public, build-date=2025-09-24T08:57:55, RELEASE=main, version=7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_CLEAN=True)
Oct 14 03:51:46 localhost systemd[1]: Started libpod-conmon-d12f8c270689f76d98f738efd39a609df37a56263360cc801b54e43d563ce327.scope.
Oct 14 03:51:46 localhost systemd[1]: Started libcrun container.
Oct 14 03:51:46 localhost podman[26816]: 2025-10-14 07:51:46.676619363 +0000 UTC m=+0.031203224 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 03:51:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d99218fba3d511bcb6f74d4bac65ecc65578a06db81bc9c554928597718e237c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 03:51:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d99218fba3d511bcb6f74d4bac65ecc65578a06db81bc9c554928597718e237c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 03:51:46 localhost podman[26816]: 2025-10-14 07:51:46.807179846 +0000 UTC m=+0.161763707 container init d12f8c270689f76d98f738efd39a609df37a56263360cc801b54e43d563ce327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_saha, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, ceph=True, maintainer=Guillaume Abrioux , distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.openshift.tags=rhceph ceph)
Oct 14 03:51:46 localhost podman[26816]: 2025-10-14 07:51:46.81886808 +0000 UTC m=+0.173451911 container start d12f8c270689f76d98f738efd39a609df37a56263360cc801b54e43d563ce327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_saha, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, release=553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True)
Oct 14 03:51:46 localhost podman[26816]: 2025-10-14 07:51:46.819081894 +0000 UTC m=+0.173665785 container attach d12f8c270689f76d98f738efd39a609df37a56263360cc801b54e43d563ce327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_saha, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph)
Oct 14 03:51:47 localhost systemd[1]: var-lib-containers-storage-overlay-4672a45caed4439cbab5cec1c715e20a5bf0eba54c3b454b3c48090c9713ec9a-merged.mount: Deactivated successfully.
Oct 14 03:51:47 localhost affectionate_saha[26831]: [
Oct 14 03:51:47 localhost affectionate_saha[26831]: {
Oct 14 03:51:47 localhost affectionate_saha[26831]: "available": false,
Oct 14 03:51:47 localhost affectionate_saha[26831]: "ceph_device": false,
Oct 14 03:51:47 localhost affectionate_saha[26831]: "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "lsm_data": {},
Oct 14 03:51:47 localhost affectionate_saha[26831]: "lvs": [],
Oct 14 03:51:47 localhost affectionate_saha[26831]: "path": "/dev/sr0",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "rejected_reasons": [
Oct 14 03:51:47 localhost affectionate_saha[26831]: "Insufficient space (<5GB)",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "Has a FileSystem"
Oct 14 03:51:47 localhost affectionate_saha[26831]: ],
Oct 14 03:51:47 localhost affectionate_saha[26831]: "sys_api": {
Oct 14 03:51:47 localhost affectionate_saha[26831]: "actuators": null,
Oct 14 03:51:47 localhost affectionate_saha[26831]: "device_nodes": "sr0",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "human_readable_size": "482.00 KB",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "id_bus": "ata",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "model": "QEMU DVD-ROM",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "nr_requests": "2",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "partitions": {},
Oct 14 03:51:47 localhost affectionate_saha[26831]: "path": "/dev/sr0",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "removable": "1",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "rev": "2.5+",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "ro": "0",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "rotational": "1",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "sas_address": "",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "sas_device_handle": "",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "scheduler_mode": "mq-deadline",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "sectors": 0,
Oct 14 03:51:47 localhost affectionate_saha[26831]: "sectorsize": "2048",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "size": 493568.0,
Oct 14 03:51:47 localhost affectionate_saha[26831]: "support_discard": "0",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "type": "disk",
Oct 14 03:51:47 localhost affectionate_saha[26831]: "vendor": "QEMU"
Oct 14 03:51:47 localhost affectionate_saha[26831]: }
Oct 14 03:51:47 localhost affectionate_saha[26831]: }
Oct 14 03:51:47 localhost affectionate_saha[26831]: ]
Oct 14 03:51:47 localhost systemd[1]: libpod-d12f8c270689f76d98f738efd39a609df37a56263360cc801b54e43d563ce327.scope: Deactivated successfully.
Oct 14 03:51:47 localhost podman[27960]: 2025-10-14 07:51:47.750574463 +0000 UTC m=+0.035950540 container died d12f8c270689f76d98f738efd39a609df37a56263360cc801b54e43d563ce327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_saha, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., distribution-scope=public, version=7, release=553, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64)
Oct 14 03:51:47 localhost systemd[1]: tmp-crun.KdFgou.mount: Deactivated successfully.
Oct 14 03:51:47 localhost systemd[1]: var-lib-containers-storage-overlay-d99218fba3d511bcb6f74d4bac65ecc65578a06db81bc9c554928597718e237c-merged.mount: Deactivated successfully.
Oct 14 03:51:47 localhost podman[27960]: 2025-10-14 07:51:47.808423074 +0000 UTC m=+0.093799091 container remove d12f8c270689f76d98f738efd39a609df37a56263360cc801b54e43d563ce327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_saha, build-date=2025-09-24T08:57:55, release=553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux , RELEASE=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 14 03:51:47 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:51:47 localhost systemd[1]: libpod-conmon-d12f8c270689f76d98f738efd39a609df37a56263360cc801b54e43d563ce327.scope: Deactivated successfully.
Oct 14 03:51:48 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:51:48 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully.
Oct 14 03:51:48 localhost systemd[1]: Closed Process Core Dump Socket.
Oct 14 03:51:48 localhost systemd[1]: Stopping Process Core Dump Socket...
Oct 14 03:51:48 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 14 03:51:48 localhost systemd[1]: Reloading.
Oct 14 03:51:48 localhost systemd-rc-local-generator[28043]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:51:48 localhost systemd-sysv-generator[28047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:51:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:51:48 localhost systemd[1]: Reloading.
Oct 14 03:51:48 localhost systemd-rc-local-generator[28079]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:51:48 localhost systemd-sysv-generator[28082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:51:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:52:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:52:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 03:52:13 localhost podman[28162]:
Oct 14 03:52:13 localhost podman[28162]: 2025-10-14 07:52:13.386796438 +0000 UTC m=+0.050948715 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 03:52:15 localhost podman[28162]: 2025-10-14 07:52:15.185555537 +0000 UTC m=+1.849707824 container create dda3410bfef6d055f49874d8ba1c5ae55b5cddea9710a1ceb057e2577a0131cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swartz, vcs-type=git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7)
Oct 14 03:52:15 localhost systemd[1]: Started libpod-conmon-dda3410bfef6d055f49874d8ba1c5ae55b5cddea9710a1ceb057e2577a0131cb.scope.
Oct 14 03:52:15 localhost systemd[1]: Started libcrun container.
Oct 14 03:52:15 localhost podman[28162]: 2025-10-14 07:52:15.321714424 +0000 UTC m=+1.985866691 container init dda3410bfef6d055f49874d8ba1c5ae55b5cddea9710a1ceb057e2577a0131cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swartz, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=553, ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph)
Oct 14 03:52:15 localhost podman[28162]: 2025-10-14 07:52:15.332427511 +0000 UTC m=+1.996579758 container start dda3410bfef6d055f49874d8ba1c5ae55b5cddea9710a1ceb057e2577a0131cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swartz, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 14 03:52:15 localhost podman[28162]: 2025-10-14 07:52:15.332755137 +0000 UTC m=+1.996907464 container attach dda3410bfef6d055f49874d8ba1c5ae55b5cddea9710a1ceb057e2577a0131cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swartz, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, release=553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_CLEAN=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 14 03:52:15 localhost jovial_swartz[28179]: 167 167
Oct 14 03:52:15 localhost systemd[1]: libpod-dda3410bfef6d055f49874d8ba1c5ae55b5cddea9710a1ceb057e2577a0131cb.scope: Deactivated successfully.
Oct 14 03:52:15 localhost podman[28162]: 2025-10-14 07:52:15.336971203 +0000 UTC m=+2.001123490 container died dda3410bfef6d055f49874d8ba1c5ae55b5cddea9710a1ceb057e2577a0131cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swartz, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, ceph=True, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7)
Oct 14 03:52:15 localhost systemd[1]: var-lib-containers-storage-overlay-cbf9f49ba8055e00ef67204fd6384457dc3b1773215e9b20abbe121d3607edfa-merged.mount: Deactivated successfully.
Oct 14 03:52:15 localhost podman[28184]: 2025-10-14 07:52:15.416169546 +0000 UTC m=+0.067430757 container remove dda3410bfef6d055f49874d8ba1c5ae55b5cddea9710a1ceb057e2577a0131cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swartz, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-09-24T08:57:55, ceph=True, io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph) Oct 14 03:52:15 localhost systemd[1]: libpod-conmon-dda3410bfef6d055f49874d8ba1c5ae55b5cddea9710a1ceb057e2577a0131cb.scope: Deactivated successfully. Oct 14 03:52:15 localhost systemd[1]: Reloading. Oct 14 03:52:15 localhost systemd-rc-local-generator[28223]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 03:52:15 localhost systemd-sysv-generator[28229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 03:52:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 03:52:15 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 14 03:52:15 localhost systemd[1]: Reloading. Oct 14 03:52:15 localhost systemd-sysv-generator[28264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 03:52:15 localhost systemd-rc-local-generator[28261]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 03:52:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 03:52:15 localhost systemd[1]: Reached target All Ceph clusters and services. Oct 14 03:52:15 localhost systemd[1]: Reloading. Oct 14 03:52:16 localhost systemd-rc-local-generator[28300]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 03:52:16 localhost systemd-sysv-generator[28306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 03:52:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 03:52:16 localhost systemd[1]: Reached target Ceph cluster fcadf6e2-9176-5818-a8d0-37b19acf8eaf. Oct 14 03:52:16 localhost systemd[1]: Reloading. Oct 14 03:52:16 localhost systemd-rc-local-generator[28343]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 03:52:16 localhost systemd-sysv-generator[28346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 03:52:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 03:52:16 localhost systemd[1]: Reloading. Oct 14 03:52:16 localhost systemd-rc-local-generator[28380]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 03:52:16 localhost systemd-sysv-generator[28383]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 03:52:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 03:52:16 localhost systemd[1]: Created slice Slice /system/ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf. Oct 14 03:52:16 localhost systemd[1]: Reached target System Time Set. Oct 14 03:52:16 localhost systemd[1]: Reached target System Time Synchronized. Oct 14 03:52:16 localhost systemd[1]: Starting Ceph crash.np0005486731 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf... Oct 14 03:52:16 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 14 03:52:16 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Oct 14 03:52:17 localhost podman[28443]: Oct 14 03:52:17 localhost podman[28443]: 2025-10-14 07:52:17.030176049 +0000 UTC m=+0.081504889 container create 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=553, maintainer=Guillaume Abrioux ) Oct 14 03:52:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54b7fc0deb005d7ec1f9fff04a6eaf323ca456911df26e5384e866811d6ec46/merged/etc/ceph/ceph.client.crash.np0005486731.keyring supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:17 localhost podman[28443]: 2025-10-14 07:52:17.000722735 +0000 UTC m=+0.052051605 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54b7fc0deb005d7ec1f9fff04a6eaf323ca456911df26e5384e866811d6ec46/merged/etc/ceph/ceph.conf supports timestamps 
until 2038 (0x7fffffff) Oct 14 03:52:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54b7fc0deb005d7ec1f9fff04a6eaf323ca456911df26e5384e866811d6ec46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:17 localhost podman[28443]: 2025-10-14 07:52:17.134030103 +0000 UTC m=+0.185358943 container init 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, release=553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True) Oct 14 03:52:17 localhost podman[28443]: 2025-10-14 07:52:17.149738823 +0000 UTC m=+0.201067693 container start 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, distribution-scope=public, vcs-type=git, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_BRANCH=main) Oct 14 03:52:17 localhost bash[28443]: 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb Oct 14 03:52:17 localhost systemd[1]: Started Ceph crash.np0005486731 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf. 
Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: INFO:ceph-crash:pinging cluster to exercise our key Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: 2025-10-14T07:52:17.305+0000 7ff5a5fe3640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: 2025-10-14T07:52:17.305+0000 7ff5a5fe3640 -1 AuthRegistry(0x7ff5a00680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: 2025-10-14T07:52:17.306+0000 7ff5a5fe3640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: 2025-10-14T07:52:17.306+0000 7ff5a5fe3640 -1 AuthRegistry(0x7ff5a5fe2000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: 2025-10-14T07:52:17.317+0000 7ff59ffff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: 2025-10-14T07:52:17.319+0000 7ff59f7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: 2025-10-14T07:52:17.320+0000 7ff59effd640 -1 monclient(hunting): 
handle_auth_bad_method server allowed_methods [2] but i only support [1] Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: 2025-10-14T07:52:17.320+0000 7ff5a5fe3640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: [errno 13] RADOS permission denied (error connecting to the cluster) Oct 14 03:52:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731[28458]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s Oct 14 03:52:25 localhost podman[28714]: Oct 14 03:52:25 localhost podman[28714]: 2025-10-14 07:52:25.463959119 +0000 UTC m=+0.100340586 container create efe1608bd19d240d730bcdc76a03420e1a50a7f849b498eb3cbb3ef21f67a93d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_kirch, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, release=553, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12) Oct 14 03:52:25 localhost podman[28714]: 2025-10-14 07:52:25.412066219 
+0000 UTC m=+0.048447686 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:25 localhost systemd[1]: Started libpod-conmon-efe1608bd19d240d730bcdc76a03420e1a50a7f849b498eb3cbb3ef21f67a93d.scope. Oct 14 03:52:25 localhost systemd[1]: Started libcrun container. Oct 14 03:52:25 localhost podman[28714]: 2025-10-14 07:52:25.601915727 +0000 UTC m=+0.238297194 container init efe1608bd19d240d730bcdc76a03420e1a50a7f849b498eb3cbb3ef21f67a93d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_kirch, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container) Oct 14 03:52:25 localhost podman[28714]: 2025-10-14 07:52:25.611795003 +0000 UTC m=+0.248176460 container start efe1608bd19d240d730bcdc76a03420e1a50a7f849b498eb3cbb3ef21f67a93d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_kirch, io.buildah.version=1.33.12, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured 
and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, version=7, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph) Oct 14 03:52:25 localhost podman[28714]: 2025-10-14 07:52:25.61204466 +0000 UTC m=+0.248426117 container attach efe1608bd19d240d730bcdc76a03420e1a50a7f849b498eb3cbb3ef21f67a93d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_kirch, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 03:52:25 localhost dazzling_kirch[28729]: 167 167 Oct 14 03:52:25 localhost systemd[1]: libpod-efe1608bd19d240d730bcdc76a03420e1a50a7f849b498eb3cbb3ef21f67a93d.scope: Deactivated successfully. Oct 14 03:52:25 localhost podman[28714]: 2025-10-14 07:52:25.615550179 +0000 UTC m=+0.251931676 container died efe1608bd19d240d730bcdc76a03420e1a50a7f849b498eb3cbb3ef21f67a93d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_kirch, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, release=553, GIT_BRANCH=main, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12) Oct 14 03:52:25 localhost systemd[1]: var-lib-containers-storage-overlay-3e1cc65f93847809e06d5fc6fd57360eb81589e50fdc06b7b40e5e33c990633a-merged.mount: Deactivated successfully. 
Oct 14 03:52:25 localhost podman[28734]: 2025-10-14 07:52:25.709596907 +0000 UTC m=+0.079608016 container remove efe1608bd19d240d730bcdc76a03420e1a50a7f849b498eb3cbb3ef21f67a93d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_kirch, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-type=git, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, version=7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=) Oct 14 03:52:25 localhost systemd[1]: libpod-conmon-efe1608bd19d240d730bcdc76a03420e1a50a7f849b498eb3cbb3ef21f67a93d.scope: Deactivated successfully. 
Oct 14 03:52:25 localhost podman[28754]: Oct 14 03:52:25 localhost podman[28754]: 2025-10-14 07:52:25.916607106 +0000 UTC m=+0.074074832 container create b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_brattain, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, ceph=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, architecture=x86_64) Oct 14 03:52:25 localhost systemd[1]: Started libpod-conmon-b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236.scope. Oct 14 03:52:25 localhost systemd[1]: Started libcrun container. 
Oct 14 03:52:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1db962c8727d4e9c5e5c9e0e94e4472180cef61dd9473e3960808a9298acfd9/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:25 localhost podman[28754]: 2025-10-14 07:52:25.888089629 +0000 UTC m=+0.045557395 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1db962c8727d4e9c5e5c9e0e94e4472180cef61dd9473e3960808a9298acfd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1db962c8727d4e9c5e5c9e0e94e4472180cef61dd9473e3960808a9298acfd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1db962c8727d4e9c5e5c9e0e94e4472180cef61dd9473e3960808a9298acfd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1db962c8727d4e9c5e5c9e0e94e4472180cef61dd9473e3960808a9298acfd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:26 localhost podman[28754]: 2025-10-14 07:52:26.042518596 +0000 UTC m=+0.199986342 container init b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_brattain, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vcs-type=git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , ceph=True, 
com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 03:52:26 localhost podman[28754]: 2025-10-14 07:52:26.052344981 +0000 UTC m=+0.209812727 container start b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_brattain, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.33.12, release=553, build-date=2025-09-24T08:57:55, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, name=rhceph, io.openshift.expose-services=, ceph=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux ) Oct 14 03:52:26 localhost podman[28754]: 2025-10-14 
07:52:26.052628389 +0000 UTC m=+0.210096145 container attach b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_brattain, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 03:52:26 localhost sharp_brattain[28770]: --> passed data devices: 0 physical, 2 LVM Oct 14 03:52:26 localhost sharp_brattain[28770]: --> relative data size: 1.0 Oct 14 03:52:26 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph-authtool --gen-print-key Oct 14 03:52:26 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8798be35-0a9e-4e0d-be22-4c39dcfea81e Oct 14 03:52:27 localhost lvm[28824]: PV /dev/loop3 online, VG ceph_vg0 is complete. 
Oct 14 03:52:27 localhost lvm[28824]: VG ceph_vg0 finished Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph-authtool --gen-print-key Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2 Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0 Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap Oct 14 03:52:27 localhost sharp_brattain[28770]: stderr: got monmap epoch 3 Oct 14 03:52:27 localhost sharp_brattain[28770]: --> Creating keyring file for osd.2 Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/ Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 8798be35-0a9e-4e0d-be22-4c39dcfea81e --setuser ceph --setgroup ceph Oct 14 03:52:29 localhost sharp_brattain[28770]: stderr: 2025-10-14T07:52:27.758+0000 7f846e9e2a80 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Oct 14 03:52:29 
localhost sharp_brattain[28770]: stderr: 2025-10-14T07:52:27.758+0000 7f846e9e2a80 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid Oct 14 03:52:29 localhost sharp_brattain[28770]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0 Oct 14 03:52:29 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Oct 14 03:52:29 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Oct 14 03:52:30 localhost sharp_brattain[28770]: --> ceph-volume lvm activate successful for osd ID: 2 Oct 14 03:52:30 localhost sharp_brattain[28770]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0 Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph-authtool --gen-print-key Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e8f18853-0710-4071-a9de-3872345d6a39 Oct 14 03:52:30 localhost lvm[29755]: PV /dev/loop4 online, VG ceph_vg1 is complete. 
Oct 14 03:52:30 localhost lvm[29755]: VG ceph_vg1 finished
Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Oct 14 03:52:30 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
Oct 14 03:52:31 localhost sharp_brattain[28770]: stderr: got monmap epoch 3
Oct 14 03:52:31 localhost sharp_brattain[28770]: --> Creating keyring file for osd.4
Oct 14 03:52:31 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Oct 14 03:52:31 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Oct 14 03:52:31 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid e8f18853-0710-4071-a9de-3872345d6a39 --setuser ceph --setgroup ceph
Oct 14 03:52:33 localhost sharp_brattain[28770]: stderr: 2025-10-14T07:52:31.352+0000 7f40cbc0ea80 -1 bluestore(/var/lib/ceph/osd/ceph-4//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 03:52:33 localhost sharp_brattain[28770]: stderr: 2025-10-14T07:52:31.352+0000 7f40cbc0ea80 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
Oct 14 03:52:33 localhost sharp_brattain[28770]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 14 03:52:33 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Oct 14 03:52:33 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
Oct 14 03:52:33 localhost sharp_brattain[28770]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Oct 14 03:52:33 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Oct 14 03:52:33 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 14 03:52:33 localhost sharp_brattain[28770]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Oct 14 03:52:33 localhost sharp_brattain[28770]: --> ceph-volume lvm activate successful for osd ID: 4
Oct 14 03:52:33 localhost sharp_brattain[28770]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 14 03:52:33 localhost systemd[1]: libpod-b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236.scope: Deactivated successfully.
Oct 14 03:52:33 localhost systemd[1]: libpod-b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236.scope: Consumed 3.860s CPU time.
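The ceph-volume transcript above reports each step it executes as a `Running command:` record. When auditing a capture like this one, those steps can be pulled out with a short filter. A minimal sketch, assuming plain syslog-style text as in this excerpt; the sample lines are trimmed copies from the log and the regex is illustrative, not an official journald format:

```python
import re

# Sample journald lines, trimmed from the ceph-volume transcript above.
LOG = """\
Oct 14 03:52:27 localhost sharp_brattain[28770]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 14 03:52:27 localhost sharp_brattain[28770]: stderr: got monmap epoch 3
Oct 14 03:52:30 localhost sharp_brattain[28770]: --> ceph-volume lvm activate successful for osd ID: 2
"""

CMD = re.compile(r"Running command: (?P<cmd>/.+)$")

def commands(text: str) -> list[str]:
    """Return the commands ceph-volume reports running, in log order."""
    return [m.group("cmd") for line in text.splitlines() if (m := CMD.search(line))]

print(commands(LOG))  # ['/usr/bin/ceph-authtool --gen-print-key']
```

Applied to the full osd.2/osd.4 sequence, this recovers the prepare/activate command ordering (mkfs, prime-osd-dir, symlinks, chowns) at a glance.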
Oct 14 03:52:33 localhost podman[28754]: 2025-10-14 07:52:33.985518413 +0000 UTC m=+8.142986149 container died b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_brattain, version=7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., ceph=True, release=553, io.openshift.expose-services=) Oct 14 03:52:34 localhost systemd[1]: var-lib-containers-storage-overlay-f1db962c8727d4e9c5e5c9e0e94e4472180cef61dd9473e3960808a9298acfd9-merged.mount: Deactivated successfully. 
Oct 14 03:52:34 localhost podman[30654]: 2025-10-14 07:52:34.064338638 +0000 UTC m=+0.070579115 container remove b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_brattain, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, release=553, io.openshift.expose-services=) Oct 14 03:52:34 localhost systemd[1]: libpod-conmon-b1a9f51ecf1ac12388ed23b29dd6df94121624fccb58d681c71d794af44f4236.scope: Deactivated successfully. 
Oct 14 03:52:34 localhost podman[30736]: Oct 14 03:52:34 localhost podman[30736]: 2025-10-14 07:52:34.800396848 +0000 UTC m=+0.070716818 container create 73a5527fb73bd9db6ffcd67c66b068aacc961512cd290ec79e0a6812e968fc01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_yalow, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, release=553, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 03:52:34 localhost systemd[1]: Started libpod-conmon-73a5527fb73bd9db6ffcd67c66b068aacc961512cd290ec79e0a6812e968fc01.scope. Oct 14 03:52:34 localhost systemd[1]: Started libcrun container. 
Oct 14 03:52:34 localhost podman[30736]: 2025-10-14 07:52:34.869266614 +0000 UTC m=+0.139586584 container init 73a5527fb73bd9db6ffcd67c66b068aacc961512cd290ec79e0a6812e968fc01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_yalow, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, ceph=True) Oct 14 03:52:34 localhost podman[30736]: 2025-10-14 07:52:34.772457407 +0000 UTC m=+0.042777387 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:34 localhost podman[30736]: 2025-10-14 07:52:34.879731107 +0000 UTC m=+0.150051067 container start 73a5527fb73bd9db6ffcd67c66b068aacc961512cd290ec79e0a6812e968fc01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_yalow, release=553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, 
io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 03:52:34 localhost podman[30736]: 2025-10-14 07:52:34.88020164 +0000 UTC m=+0.150521600 container attach 73a5527fb73bd9db6ffcd67c66b068aacc961512cd290ec79e0a6812e968fc01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_yalow, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, ceph=True, vendor=Red Hat, Inc., release=553, version=7, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Oct 14 03:52:34 localhost 
quirky_yalow[30751]: 167 167 Oct 14 03:52:34 localhost systemd[1]: libpod-73a5527fb73bd9db6ffcd67c66b068aacc961512cd290ec79e0a6812e968fc01.scope: Deactivated successfully. Oct 14 03:52:34 localhost podman[30736]: 2025-10-14 07:52:34.884099558 +0000 UTC m=+0.154419538 container died 73a5527fb73bd9db6ffcd67c66b068aacc961512cd290ec79e0a6812e968fc01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_yalow, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, version=7) Oct 14 03:52:34 localhost podman[30757]: 2025-10-14 07:52:34.968339093 +0000 UTC m=+0.074133023 container remove 73a5527fb73bd9db6ffcd67c66b068aacc961512cd290ec79e0a6812e968fc01 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_yalow, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 03:52:34 localhost systemd[1]: libpod-conmon-73a5527fb73bd9db6ffcd67c66b068aacc961512cd290ec79e0a6812e968fc01.scope: Deactivated successfully. Oct 14 03:52:35 localhost systemd[1]: var-lib-containers-storage-overlay-2fec35fb65676d9f8982b8966b57cbac1963adadb7b0bfc28c754b8cde49e188-merged.mount: Deactivated successfully. 
Oct 14 03:52:35 localhost podman[30776]: Oct 14 03:52:35 localhost podman[30776]: 2025-10-14 07:52:35.172009928 +0000 UTC m=+0.062473027 container create 101e7ee5c65e32aa681d28fa06f52eacbe2d8a86f67815ec158b463796501576 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_thompson, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public) Oct 14 03:52:35 localhost systemd[1]: Started libpod-conmon-101e7ee5c65e32aa681d28fa06f52eacbe2d8a86f67815ec158b463796501576.scope. Oct 14 03:52:35 localhost systemd[1]: Started libcrun container. 
Oct 14 03:52:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e3c96e288ac66fe008a98b5aadf608e994227e7b17a3a94fbe2c4fd090d1897/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e3c96e288ac66fe008a98b5aadf608e994227e7b17a3a94fbe2c4fd090d1897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e3c96e288ac66fe008a98b5aadf608e994227e7b17a3a94fbe2c4fd090d1897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:35 localhost podman[30776]: 2025-10-14 07:52:35.149950982 +0000 UTC m=+0.040414111 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:35 localhost podman[30776]: 2025-10-14 07:52:35.264483204 +0000 UTC m=+0.154946303 container init 101e7ee5c65e32aa681d28fa06f52eacbe2d8a86f67815ec158b463796501576 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_thompson, version=7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, RELEASE=main, maintainer=Guillaume Abrioux , release=553, ceph=True, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.openshift.tags=rhceph ceph, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main) Oct 14 03:52:35 localhost podman[30776]: 2025-10-14 07:52:35.274939887 +0000 UTC m=+0.165402986 container start 101e7ee5c65e32aa681d28fa06f52eacbe2d8a86f67815ec158b463796501576 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_thompson, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, architecture=x86_64, version=7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55) Oct 14 03:52:35 localhost podman[30776]: 2025-10-14 07:52:35.275375129 +0000 UTC m=+0.165838278 container attach 101e7ee5c65e32aa681d28fa06f52eacbe2d8a86f67815ec158b463796501576 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_thompson, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.)
Oct 14 03:52:35 localhost nervous_thompson[30791]: {
Oct 14 03:52:35 localhost nervous_thompson[30791]: "2": [
Oct 14 03:52:35 localhost nervous_thompson[30791]: {
Oct 14 03:52:35 localhost nervous_thompson[30791]: "devices": [
Oct 14 03:52:35 localhost nervous_thompson[30791]: "/dev/loop3"
Oct 14 03:52:35 localhost nervous_thompson[30791]: ],
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_name": "ceph_lv0",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_size": "7511998464",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=hQHp5r-jAi0-oaKi-eEvM-0O0y-wJkc-UexPHz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fcadf6e2-9176-5818-a8d0-37b19acf8eaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8798be35-0a9e-4e0d-be22-4c39dcfea81e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_uuid": "hQHp5r-jAi0-oaKi-eEvM-0O0y-wJkc-UexPHz",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "name": "ceph_lv0",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "tags": {
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.block_uuid": "hQHp5r-jAi0-oaKi-eEvM-0O0y-wJkc-UexPHz",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.cephx_lockbox_secret": "",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.cluster_fsid": "fcadf6e2-9176-5818-a8d0-37b19acf8eaf",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.cluster_name": "ceph",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.crush_device_class": "",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.encrypted": "0",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.osd_fsid": "8798be35-0a9e-4e0d-be22-4c39dcfea81e",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.osd_id": "2",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.osdspec_affinity": "default_drive_group",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.type": "block",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.vdo": "0"
Oct 14 03:52:35 localhost nervous_thompson[30791]: },
Oct 14 03:52:35 localhost nervous_thompson[30791]: "type": "block",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "vg_name": "ceph_vg0"
Oct 14 03:52:35 localhost nervous_thompson[30791]: }
Oct 14 03:52:35 localhost nervous_thompson[30791]: ],
Oct 14 03:52:35 localhost nervous_thompson[30791]: "4": [
Oct 14 03:52:35 localhost nervous_thompson[30791]: {
Oct 14 03:52:35 localhost nervous_thompson[30791]: "devices": [
Oct 14 03:52:35 localhost nervous_thompson[30791]: "/dev/loop4"
Oct 14 03:52:35 localhost nervous_thompson[30791]: ],
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_name": "ceph_lv1",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_size": "7511998464",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p83gus-3M4H-E6wy-4xUQ-X8my-I6sS-zrnbjO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fcadf6e2-9176-5818-a8d0-37b19acf8eaf,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e8f18853-0710-4071-a9de-3872345d6a39,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "lv_uuid": "p83gus-3M4H-E6wy-4xUQ-X8my-I6sS-zrnbjO",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "name": "ceph_lv1",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "tags": {
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.block_uuid": "p83gus-3M4H-E6wy-4xUQ-X8my-I6sS-zrnbjO",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.cephx_lockbox_secret": "",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.cluster_fsid": "fcadf6e2-9176-5818-a8d0-37b19acf8eaf",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.cluster_name": "ceph",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.crush_device_class": "",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.encrypted": "0",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.osd_fsid": "e8f18853-0710-4071-a9de-3872345d6a39",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.osd_id": "4",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.osdspec_affinity": "default_drive_group",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.type": "block",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "ceph.vdo": "0"
Oct 14 03:52:35 localhost nervous_thompson[30791]: },
Oct 14 03:52:35 localhost nervous_thompson[30791]: "type": "block",
Oct 14 03:52:35 localhost nervous_thompson[30791]: "vg_name": "ceph_vg1"
Oct 14 03:52:35 localhost nervous_thompson[30791]: }
Oct 14 03:52:35 localhost nervous_thompson[30791]: ]
Oct 14 03:52:35 localhost nervous_thompson[30791]: }
Oct 14 03:52:35 localhost systemd[1]: libpod-101e7ee5c65e32aa681d28fa06f52eacbe2d8a86f67815ec158b463796501576.scope: Deactivated successfully.
Oct 14 03:52:35 localhost podman[30776]: 2025-10-14 07:52:35.624926832 +0000 UTC m=+0.515390041 container died 101e7ee5c65e32aa681d28fa06f52eacbe2d8a86f67815ec158b463796501576 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_thompson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, build-date=2025-09-24T08:57:55, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.33.12, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public)
Oct 14 03:52:35 localhost podman[30800]: 2025-10-14 07:52:35.718576781 +0000 UTC m=+0.081597373 container remove 101e7ee5c65e32aa681d28fa06f52eacbe2d8a86f67815ec158b463796501576 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_thompson, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7,
GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, release=553) Oct 14 03:52:35 localhost systemd[1]: libpod-conmon-101e7ee5c65e32aa681d28fa06f52eacbe2d8a86f67815ec158b463796501576.scope: Deactivated successfully. Oct 14 03:52:36 localhost systemd[1]: tmp-crun.fd1L6J.mount: Deactivated successfully. Oct 14 03:52:36 localhost systemd[1]: var-lib-containers-storage-overlay-8e3c96e288ac66fe008a98b5aadf608e994227e7b17a3a94fbe2c4fd090d1897-merged.mount: Deactivated successfully. 
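The JSON that the nervous_thompson container echoed above (the output of a `ceph-volume lvm list`-style inventory run inside the rhceph image) maps OSD ids to their logical volumes and backing devices. A minimal sketch of consuming that payload; the structure below is trimmed to the fields used, with values copied from the log:

```python
import json

# Trimmed-down copy of the inventory JSON printed in the log above;
# only the fields consumed below are kept.
PAYLOAD = json.loads("""
{
  "2": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
         "tags": {"ceph.osd_fsid": "8798be35-0a9e-4e0d-be22-4c39dcfea81e"}}],
  "4": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"],
         "tags": {"ceph.osd_fsid": "e8f18853-0710-4071-a9de-3872345d6a39"}}]
}
""")

def osd_devices(payload: dict) -> dict[int, list[str]]:
    """Map each OSD id to the sorted list of physical devices backing its LVs."""
    return {int(osd_id): sorted(dev for lv in lvs for dev in lv["devices"])
            for osd_id, lvs in payload.items()}

print(osd_devices(PAYLOAD))  # {2: ['/dev/loop3'], 4: ['/dev/loop4']}
```

This matches the provisioning earlier in the log: osd.2 on ceph_vg0/ceph_lv0 over /dev/loop3, osd.4 on ceph_vg1/ceph_lv1 over /dev/loop4.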
Oct 14 03:52:36 localhost podman[30885]: Oct 14 03:52:36 localhost podman[30885]: 2025-10-14 07:52:36.507006505 +0000 UTC m=+0.078195087 container create c3a98d6b84cc36de7baf86b92c2bb9ff20be08d7356996ab07eec9ceb4fcf945 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_varahamihira, GIT_CLEAN=True, CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, release=553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.) Oct 14 03:52:36 localhost systemd[1]: Started libpod-conmon-c3a98d6b84cc36de7baf86b92c2bb9ff20be08d7356996ab07eec9ceb4fcf945.scope. Oct 14 03:52:36 localhost systemd[1]: Started libcrun container. 
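The kernel's earlier `xfs filesystem being remounted ... supports timestamps until 2038 (0x7fffffff)` messages flag the 32-bit time_t ceiling on xfs filesystems formatted without the bigtime feature. The reported limit decodes as follows; a quick check, not Ceph-specific:

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit epoch second, the limit the
# kernel reports for xfs filesystems lacking bigtime support.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
```

Filesystems created with `mkfs.xfs -m bigtime=1` (the default on newer xfsprogs) extend this range and do not log the warning.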
Oct 14 03:52:36 localhost podman[30885]: 2025-10-14 07:52:36.573047612 +0000 UTC m=+0.144236204 container init c3a98d6b84cc36de7baf86b92c2bb9ff20be08d7356996ab07eec9ceb4fcf945 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_varahamihira, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, version=7, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, distribution-scope=public, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 14 03:52:36 localhost podman[30885]: 2025-10-14 07:52:36.475247077 +0000 UTC m=+0.046435699 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 03:52:36 localhost podman[30885]: 2025-10-14 07:52:36.583627728 +0000 UTC m=+0.154816320 container start c3a98d6b84cc36de7baf86b92c2bb9ff20be08d7356996ab07eec9ceb4fcf945 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_varahamihira, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_CLEAN=True, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , release=553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=)
Oct 14 03:52:36 localhost hungry_varahamihira[30900]: 167 167
Oct 14 03:52:36 localhost systemd[1]: libpod-c3a98d6b84cc36de7baf86b92c2bb9ff20be08d7356996ab07eec9ceb4fcf945.scope: Deactivated successfully.
Oct 14 03:52:36 localhost podman[30885]: 2025-10-14 07:52:36.584153932 +0000 UTC m=+0.155342554 container attach c3a98d6b84cc36de7baf86b92c2bb9ff20be08d7356996ab07eec9ceb4fcf945 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_varahamihira, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, distribution-scope=public, version=7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, ceph=True)
Oct 14 03:52:36 localhost podman[30885]: 2025-10-14 07:52:36.587854335 +0000 UTC m=+0.159042917 container died c3a98d6b84cc36de7baf86b92c2bb9ff20be08d7356996ab07eec9ceb4fcf945 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_varahamihira, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, version=7, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, release=553, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 14 03:52:36 localhost podman[30905]: 2025-10-14 07:52:36.677899003 +0000 UTC m=+0.079021930 container remove c3a98d6b84cc36de7baf86b92c2bb9ff20be08d7356996ab07eec9ceb4fcf945 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_varahamihira, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, RELEASE=main, io.buildah.version=1.33.12, distribution-scope=public, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7)
Oct 14 03:52:36 localhost systemd[1]: libpod-conmon-c3a98d6b84cc36de7baf86b92c2bb9ff20be08d7356996ab07eec9ceb4fcf945.scope: Deactivated successfully.
Oct 14 03:52:36 localhost podman[30933]: 
Oct 14 03:52:36 localhost podman[30933]: 2025-10-14 07:52:36.970208107 +0000 UTC m=+0.068566209 container create 282bc1b24356bc0111419ed3e0fbc1c429ef1f077fd353d7ad9969932008a460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate-test, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, release=553, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main)
Oct 14 03:52:37 localhost systemd[1]: Started libpod-conmon-282bc1b24356bc0111419ed3e0fbc1c429ef1f077fd353d7ad9969932008a460.scope.
Oct 14 03:52:37 localhost systemd[1]: Started libcrun container.
Oct 14 03:52:37 localhost systemd[1]: var-lib-containers-storage-overlay-1c79c325071e1caa7b68a23e064741e1e6c748277a47223c55c77eefd5ef795c-merged.mount: Deactivated successfully.
Oct 14 03:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1846086cdd831e961c8837ae118cfb0dca270162e90827436d185c0dd70518d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:37 localhost podman[30933]: 2025-10-14 07:52:36.947888512 +0000 UTC m=+0.046246594 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 03:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1846086cdd831e961c8837ae118cfb0dca270162e90827436d185c0dd70518d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1846086cdd831e961c8837ae118cfb0dca270162e90827436d185c0dd70518d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1846086cdd831e961c8837ae118cfb0dca270162e90827436d185c0dd70518d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1846086cdd831e961c8837ae118cfb0dca270162e90827436d185c0dd70518d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:37 localhost podman[30933]: 2025-10-14 07:52:37.098440192 +0000 UTC m=+0.196798304 container init 282bc1b24356bc0111419ed3e0fbc1c429ef1f077fd353d7ad9969932008a460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate-test, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., ceph=True, distribution-scope=public, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_BRANCH=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True)
Oct 14 03:52:37 localhost podman[30933]: 2025-10-14 07:52:37.113366059 +0000 UTC m=+0.211724171 container start 282bc1b24356bc0111419ed3e0fbc1c429ef1f077fd353d7ad9969932008a460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate-test, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7)
Oct 14 03:52:37 localhost podman[30933]: 2025-10-14 07:52:37.113685378 +0000 UTC m=+0.212043500 container attach 282bc1b24356bc0111419ed3e0fbc1c429ef1f077fd353d7ad9969932008a460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate-test, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 14 03:52:37 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate-test[30948]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 14 03:52:37 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate-test[30948]: [--no-systemd] [--no-tmpfs]
Oct 14 03:52:37 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate-test[30948]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 14 03:52:37 localhost systemd[1]: libpod-282bc1b24356bc0111419ed3e0fbc1c429ef1f077fd353d7ad9969932008a460.scope: Deactivated successfully.
Oct 14 03:52:37 localhost podman[30933]: 2025-10-14 07:52:37.357935877 +0000 UTC m=+0.456294009 container died 282bc1b24356bc0111419ed3e0fbc1c429ef1f077fd353d7ad9969932008a460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate-test, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, name=rhceph)
Oct 14 03:52:37 localhost systemd[1]: tmp-crun.pQOJsr.mount: Deactivated successfully.
Oct 14 03:52:37 localhost systemd-journald[618]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Oct 14 03:52:37 localhost systemd-journald[618]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 14 03:52:37 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 03:52:37 localhost systemd[1]: var-lib-containers-storage-overlay-e1846086cdd831e961c8837ae118cfb0dca270162e90827436d185c0dd70518d-merged.mount: Deactivated successfully.
Oct 14 03:52:37 localhost podman[30953]: 2025-10-14 07:52:37.46103123 +0000 UTC m=+0.093259648 container remove 282bc1b24356bc0111419ed3e0fbc1c429ef1f077fd353d7ad9969932008a460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate-test, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.33.12, release=553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph)
Oct 14 03:52:37 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 03:52:37 localhost systemd[1]: libpod-conmon-282bc1b24356bc0111419ed3e0fbc1c429ef1f077fd353d7ad9969932008a460.scope: Deactivated successfully.
Oct 14 03:52:37 localhost systemd[1]: Reloading.
Oct 14 03:52:37 localhost systemd-sysv-generator[31012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:52:37 localhost systemd-rc-local-generator[31009]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:52:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:52:38 localhost systemd[1]: Reloading.
Oct 14 03:52:38 localhost systemd-sysv-generator[31052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:52:38 localhost systemd-rc-local-generator[31048]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:52:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:52:38 localhost systemd[1]: Starting Ceph osd.2 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf...
Oct 14 03:52:38 localhost podman[31118]: 
Oct 14 03:52:38 localhost podman[31118]: 2025-10-14 07:52:38.661384833 +0000 UTC m=+0.073315682 container create db7cf518cbf42c49b404234b50a7752b29a3c787bbc686bf78eeb681be51179d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, name=rhceph, distribution-scope=public)
Oct 14 03:52:38 localhost systemd[1]: Started libcrun container.
Oct 14 03:52:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94709f8fe1df3f907ae77223b7ece89d87e52a4374d82b392390090e0dbf8eb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:38 localhost podman[31118]: 2025-10-14 07:52:38.632479134 +0000 UTC m=+0.044409983 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 03:52:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94709f8fe1df3f907ae77223b7ece89d87e52a4374d82b392390090e0dbf8eb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94709f8fe1df3f907ae77223b7ece89d87e52a4374d82b392390090e0dbf8eb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94709f8fe1df3f907ae77223b7ece89d87e52a4374d82b392390090e0dbf8eb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94709f8fe1df3f907ae77223b7ece89d87e52a4374d82b392390090e0dbf8eb6/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:38 localhost podman[31118]: 2025-10-14 07:52:38.783045214 +0000 UTC m=+0.194976063 container init db7cf518cbf42c49b404234b50a7752b29a3c787bbc686bf78eeb681be51179d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate, ceph=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7)
Oct 14 03:52:38 localhost systemd[1]: tmp-crun.b4tHYV.mount: Deactivated successfully.
Oct 14 03:52:38 localhost podman[31118]: 2025-10-14 07:52:38.796320025 +0000 UTC m=+0.208250874 container start db7cf518cbf42c49b404234b50a7752b29a3c787bbc686bf78eeb681be51179d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Oct 14 03:52:38 localhost podman[31118]: 2025-10-14 07:52:38.796657624 +0000 UTC m=+0.208588523 container attach db7cf518cbf42c49b404234b50a7752b29a3c787bbc686bf78eeb681be51179d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate, vendor=Red Hat, Inc., io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.33.12, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main)
Oct 14 03:52:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate[31132]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 03:52:39 localhost bash[31118]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 03:52:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate[31132]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 14 03:52:39 localhost bash[31118]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 14 03:52:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate[31132]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 14 03:52:39 localhost bash[31118]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 14 03:52:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate[31132]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 14 03:52:39 localhost bash[31118]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 14 03:52:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate[31132]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 14 03:52:39 localhost bash[31118]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 14 03:52:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate[31132]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 03:52:39 localhost bash[31118]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 03:52:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate[31132]: --> ceph-volume raw activate successful for osd ID: 2
Oct 14 03:52:39 localhost bash[31118]: --> ceph-volume raw activate successful for osd ID: 2
Oct 14 03:52:39 localhost systemd[1]: libpod-db7cf518cbf42c49b404234b50a7752b29a3c787bbc686bf78eeb681be51179d.scope: Deactivated successfully.
Oct 14 03:52:39 localhost podman[31118]: 2025-10-14 07:52:39.498061095 +0000 UTC m=+0.909991974 container died db7cf518cbf42c49b404234b50a7752b29a3c787bbc686bf78eeb681be51179d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=)
Oct 14 03:52:39 localhost podman[31252]: 2025-10-14 07:52:39.595652254 +0000 UTC m=+0.082860418 container remove db7cf518cbf42c49b404234b50a7752b29a3c787bbc686bf78eeb681be51179d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2-activate, version=7, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 14 03:52:39 localhost systemd[1]: var-lib-containers-storage-overlay-94709f8fe1df3f907ae77223b7ece89d87e52a4374d82b392390090e0dbf8eb6-merged.mount: Deactivated successfully.
Oct 14 03:52:39 localhost podman[31312]: 
Oct 14 03:52:39 localhost podman[31312]: 2025-10-14 07:52:39.889442829 +0000 UTC m=+0.059108724 container create 1322f9809857891bc586306b7d3fa6f2d0a7642b9a7a3a5b7ba129ac3594acec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, name=rhceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12, ceph=True, vendor=Red Hat, Inc.)
Oct 14 03:52:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc4553e873e578e88580cf8370a473eff0910fbeb22918dc0c11c8be62ec842/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc4553e873e578e88580cf8370a473eff0910fbeb22918dc0c11c8be62ec842/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:39 localhost podman[31312]: 2025-10-14 07:52:39.860392647 +0000 UTC m=+0.030058552 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 03:52:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc4553e873e578e88580cf8370a473eff0910fbeb22918dc0c11c8be62ec842/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc4553e873e578e88580cf8370a473eff0910fbeb22918dc0c11c8be62ec842/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc4553e873e578e88580cf8370a473eff0910fbeb22918dc0c11c8be62ec842/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:39 localhost podman[31312]: 2025-10-14 07:52:39.996635556 +0000 UTC m=+0.166301451 container init 1322f9809857891bc586306b7d3fa6f2d0a7642b9a7a3a5b7ba129ac3594acec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2, architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, ceph=True, name=rhceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553)
Oct 14 03:52:40 localhost systemd[1]: tmp-crun.X2kk4B.mount: Deactivated successfully.
Oct 14 03:52:40 localhost podman[31312]: 2025-10-14 07:52:40.009353462 +0000 UTC m=+0.179019367 container start 1322f9809857891bc586306b7d3fa6f2d0a7642b9a7a3a5b7ba129ac3594acec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph)
Oct 14 03:52:40 localhost bash[31312]: 1322f9809857891bc586306b7d3fa6f2d0a7642b9a7a3a5b7ba129ac3594acec
Oct 14 03:52:40 localhost systemd[1]: Started Ceph osd.2 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf.
Oct 14 03:52:40 localhost ceph-osd[31330]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 03:52:40 localhost ceph-osd[31330]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Oct 14 03:52:40 localhost ceph-osd[31330]: pidfile_write: ignore empty --pid-file
Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 03:52:40 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard 
supported Oct 14 03:52:40 localhost ceph-osd[31330]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) close Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) close Oct 14 03:52:40 localhost ceph-osd[31330]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal Oct 14 03:52:40 localhost ceph-osd[31330]: load: jerasure load: lrc Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 14 03:52:40 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) close Oct 14 03:52:40 localhost podman[31424]: Oct 14 03:52:40 localhost podman[31424]: 2025-10-14 07:52:40.842229209 +0000 UTC m=+0.079016690 container create a5f80a95d2cf35451b3edc18b5d5b73dadde595da47c5a4d27b970e3bc3bd513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_taussig, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, 
GIT_CLEAN=True, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, release=553, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 14 03:52:40 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 14 03:52:40 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) close Oct 14 03:52:40 localhost systemd[1]: Started libpod-conmon-a5f80a95d2cf35451b3edc18b5d5b73dadde595da47c5a4d27b970e3bc3bd513.scope. Oct 14 03:52:40 localhost podman[31424]: 2025-10-14 07:52:40.811199421 +0000 UTC m=+0.047986952 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:40 localhost systemd[1]: Started libcrun container. 
Oct 14 03:52:40 localhost podman[31424]: 2025-10-14 07:52:40.936616268 +0000 UTC m=+0.173403739 container init a5f80a95d2cf35451b3edc18b5d5b73dadde595da47c5a4d27b970e3bc3bd513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_taussig, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=553, io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 14 03:52:40 localhost podman[31424]: 2025-10-14 07:52:40.948546931 +0000 UTC m=+0.185334402 container start a5f80a95d2cf35451b3edc18b5d5b73dadde595da47c5a4d27b970e3bc3bd513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_taussig, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, release=553, RELEASE=main, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Oct 14 03:52:40 localhost podman[31424]: 2025-10-14 07:52:40.94885577 +0000 UTC m=+0.185643311 container attach a5f80a95d2cf35451b3edc18b5d5b73dadde595da47c5a4d27b970e3bc3bd513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_taussig, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, release=553, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 14 03:52:40 localhost trusting_taussig[31445]: 167 167
Oct 14 03:52:40 localhost systemd[1]: libpod-a5f80a95d2cf35451b3edc18b5d5b73dadde595da47c5a4d27b970e3bc3bd513.scope: Deactivated successfully.
Oct 14 03:52:40 localhost podman[31424]: 2025-10-14 07:52:40.955170296 +0000 UTC m=+0.191957787 container died a5f80a95d2cf35451b3edc18b5d5b73dadde595da47c5a4d27b970e3bc3bd513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_taussig, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git)
Oct 14 03:52:41 localhost systemd[1]: var-lib-containers-storage-overlay-bd34a121e5aefe4f4822b72c70dbbaf554bf919c02cca25402320f2d05448dd5-merged.mount: Deactivated successfully.
Oct 14 03:52:41 localhost podman[31450]: 2025-10-14 07:52:41.043369963 +0000 UTC m=+0.079652209 container remove a5f80a95d2cf35451b3edc18b5d5b73dadde595da47c5a4d27b970e3bc3bd513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_taussig, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, name=rhceph, distribution-scope=public, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 14 03:52:41 localhost systemd[1]: libpod-conmon-a5f80a95d2cf35451b3edc18b5d5b73dadde595da47c5a4d27b970e3bc3bd513.scope: Deactivated successfully.
Oct 14 03:52:41 localhost ceph-osd[31330]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 14 03:52:41 localhost ceph-osd[31330]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d8e00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 03:52:41 localhost ceph-osd[31330]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Oct 14 03:52:41 localhost ceph-osd[31330]: bluefs mount
Oct 14 03:52:41 localhost ceph-osd[31330]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 14 03:52:41 localhost ceph-osd[31330]: bluefs mount shared_bdev_used = 0
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: RocksDB version: 7.9.2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Git sha 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: DB SUMMARY
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: DB Session ID: R50W6UNHC64WMJ8VRA3H
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: CURRENT file: CURRENT
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: IDENTITY file: IDENTITY
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.error_if_exists: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.create_if_missing: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.flush_verify_memtable_count: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.env: 0x55644db6ccb0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.fs: LegacyFileSystem
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.info_log: 0x55644e87a780
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_file_opening_threads: 16
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.statistics: (nil)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.use_fsync: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_log_file_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_manifest_file_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.log_file_time_to_roll: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.keep_log_file_num: 1000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.recycle_log_file_num: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_fallocate: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_mmap_reads: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_mmap_writes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.use_direct_reads: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.create_missing_column_families: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.db_log_dir:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_dir: db.wal
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_cache_numshardbits: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.WAL_ttl_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.WAL_size_limit_MB: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.manifest_preallocation_size: 4194304
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.is_fd_close_on_exec: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.advise_random_on_open: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.db_write_buffer_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_manager: 0x55644d8c3400
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.access_hint_on_compaction_start: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.random_access_max_buffer_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.use_adaptive_mutex: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.rate_limiter: (nil)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_recovery_mode: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_thread_tracking: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_pipelined_write: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.unordered_write: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_concurrent_memtable_write: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_thread_max_yield_usec: 100
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_thread_slow_yield_usec: 3
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.row_cache: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.avoid_flush_during_recovery: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_ingest_behind: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.two_write_queues: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.manual_wal_flush: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_compression: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.atomic_flush: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.persist_stats_to_disk: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_dbid_to_manifest: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.log_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.file_checksum_gen_factory: Unknown
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.best_efforts_recovery: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_data_in_errors: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.db_host_id: __hostname__
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enforce_single_del_contracts: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_background_jobs: 4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_background_compactions: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_subcompactions: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.avoid_flush_during_shutdown: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.writable_file_max_buffer_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.delayed_write_rate : 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_total_wal_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.stats_dump_period_sec: 600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.stats_persist_period_sec: 600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.stats_history_buffer_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_open_files: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bytes_per_sync: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_bytes_per_sync: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.strict_bytes_per_sync: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_readahead_size: 2097152
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_background_flushes: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Compression algorithms supported:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kZSTD supported: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kXpressCompression supported: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kBZip2Compression supported: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kLZ4Compression supported: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kZlibCompression supported: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kSnappyCompression supported: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87a940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb:
Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87a940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.enable_blob_files: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87a940)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:41 localhost 
ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 
localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87a940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost 
ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 
32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87a940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 
block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost 
ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 
localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 
14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory 
options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87a940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost 
ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: 
rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.bloom_locality: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for 
column family [p-2]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87a940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 
32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost 
ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 
03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87ab60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b02d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87ab60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b02d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e87ab60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b02d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b1775588-07b6-46d1-9694-a03ea9c45024
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428361175746, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428361176042, "job": 1, "event": "recovery_finished"}
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 14 03:52:41 localhost ceph-osd[31330]: freelist init
Oct 14 03:52:41 localhost ceph-osd[31330]: freelist _read_cfg
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 14 03:52:41 localhost ceph-osd[31330]: bluefs umount
Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) close
Oct 14 03:52:41 localhost podman[31673]:
Oct 14 03:52:41 localhost podman[31673]: 2025-10-14 07:52:41.393153683 +0000 UTC m=+0.080492152 container create 7957dc2f2a3c0b63d3122a9e8a4ad123306e9c918e49e299ca7d0090aaae736b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate-test, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=,
GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, release=553, GIT_CLEAN=True, io.buildah.version=1.33.12, vcs-type=git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Oct 14 03:52:41 localhost ceph-osd[31330]: bdev(0x55644d8d9180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 14 03:52:41 localhost ceph-osd[31330]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB Oct 14 03:52:41 localhost ceph-osd[31330]: bluefs mount Oct 14 03:52:41 localhost ceph-osd[31330]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Oct 14 03:52:41 localhost ceph-osd[31330]: bluefs mount shared_bdev_used = 4718592 Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: RocksDB version: 7.9.2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Git sha 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: DB SUMMARY Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: DB Session ID: R50W6UNHC64WMJ8VRA3G Oct 14 03:52:41 localhost ceph-osd[31330]: 
rocksdb: CURRENT file: CURRENT Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: IDENTITY file: IDENTITY Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.error_if_exists: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.create_if_missing: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_checks: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.env: 0x55644e71c690 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.fs: LegacyFileSystem Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.info_log: 0x55644e8fa920 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_file_opening_threads: 16 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.statistics: (nil) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.use_fsync: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_log_file_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.log_file_time_to_roll: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.keep_log_file_num: 1000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.recycle_log_file_num: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: 
rocksdb: Options.allow_fallocate: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_mmap_reads: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_mmap_writes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.use_direct_reads: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.create_missing_column_families: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.db_log_dir: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_dir: db.wal Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_cache_numshardbits: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.advise_random_on_open: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.db_write_buffer_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_manager: 0x55644d8c34a0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.random_access_max_buffer_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.use_adaptive_mutex: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.rate_limiter: (nil) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_recovery_mode: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: 
rocksdb: Options.enable_thread_tracking: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_pipelined_write: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.unordered_write: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.row_cache: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.allow_ingest_behind: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.two_write_queues: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.manual_wal_flush: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_compression: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.atomic_flush: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.persist_stats_to_disk: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.log_readahead_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.file_checksum_gen_factory: Unknown Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.best_efforts_recovery: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 14 03:52:41 localhost ceph-osd[31330]: 
rocksdb: Options.allow_data_in_errors: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.db_host_id: __hostname__ Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enforce_single_del_contracts: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_background_jobs: 4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_background_compactions: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_subcompactions: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.writable_file_max_buffer_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.delayed_write_rate : 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_total_wal_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.stats_dump_period_sec: 600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.stats_persist_period_sec: 600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.stats_history_buffer_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_open_files: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bytes_per_sync: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.strict_bytes_per_sync: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_readahead_size: 2097152 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_background_flushes: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Compression algorithms supported: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kZSTD supported: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kXpressCompression supported: 0 Oct 14 03:52:41 
localhost ceph-osd[31330]: rocksdb: #011kBZip2Compression supported: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kLZ4Compression supported: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kZlibCompression supported: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: #011kSnappyCompression supported: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: DMutex implementation: pthread_mutex_t Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fab40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 
0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: 
rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 
localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost 
ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:41 
localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 
03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fab40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 
03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 
03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fab40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fab40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fab40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fab40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost
ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 
localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 
14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory 
options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fab40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b0850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost 
ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: 
rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.bloom_locality: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for 
column family [O-0]: Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fac80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b02d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: 
Options.write_buffer_size: 16777216 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 
32767 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:41 localhost 
ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 
03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fac80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b02d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.merge_operator: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55644e8fac80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55644d8b02d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:41 localhost systemd[1]: Started libpod-conmon-7957dc2f2a3c0b63d3122a9e8a4ad123306e9c918e49e299ca7d0090aaae736b.scope.
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression: LZ4
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.num_levels: 7
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b1775588-07b6-46d1-9694-a03ea9c45024
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428361441531, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428361447406, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760428361, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1775588-07b6-46d1-9694-a03ea9c45024", "db_session_id": "R50W6UNHC64WMJ8VRA3G", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428361451466, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760428361, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1775588-07b6-46d1-9694-a03ea9c45024", "db_session_id": "R50W6UNHC64WMJ8VRA3G", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428361455631, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760428361, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1775588-07b6-46d1-9694-a03ea9c45024", "db_session_id": "R50W6UNHC64WMJ8VRA3G", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428361459632, "job": 1, "event": "recovery_finished"}
Oct 14 03:52:41 localhost podman[31673]: 2025-10-14 07:52:41.361087506 +0000 UTC m=+0.048425795 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 14 03:52:41 localhost systemd[1]: Started libcrun container.
Oct 14 03:52:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0154868c838982341a7d78d4e223f79cdb884bd2626bbf22660b27e9b93d9d20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55644d910700
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: DB pointer 0x55644e7d1a00
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 14 03:52:41 localhost ceph-osd[31330]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 03:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55644d8b0850#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55644d8b0850#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55644d8b0850#2 capacity: 460.80 MB usag
Oct 14 03:52:41 localhost ceph-osd[31330]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 14 03:52:41 localhost ceph-osd[31330]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 14 03:52:41 localhost ceph-osd[31330]: _get_class not permitted to load lua
Oct 14 03:52:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0154868c838982341a7d78d4e223f79cdb884bd2626bbf22660b27e9b93d9d20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:41 localhost ceph-osd[31330]: _get_class not permitted to load sdk
Oct 14 03:52:41 localhost ceph-osd[31330]: _get_class not permitted to load test_remote_reads
Oct 14 03:52:41 localhost ceph-osd[31330]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 14 
03:52:41 localhost ceph-osd[31330]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Oct 14 03:52:41 localhost ceph-osd[31330]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds Oct 14 03:52:41 localhost ceph-osd[31330]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Oct 14 03:52:41 localhost ceph-osd[31330]: osd.2 0 load_pgs Oct 14 03:52:41 localhost ceph-osd[31330]: osd.2 0 load_pgs opened 0 pgs Oct 14 03:52:41 localhost ceph-osd[31330]: osd.2 0 log_to_monitors true Oct 14 03:52:41 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2[31326]: 2025-10-14T07:52:41.504+0000 7efd86302a80 -1 osd.2 0 log_to_monitors true Oct 14 03:52:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0154868c838982341a7d78d4e223f79cdb884bd2626bbf22660b27e9b93d9d20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0154868c838982341a7d78d4e223f79cdb884bd2626bbf22660b27e9b93d9d20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0154868c838982341a7d78d4e223f79cdb884bd2626bbf22660b27e9b93d9d20/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:41 localhost podman[31673]: 2025-10-14 07:52:41.550747499 +0000 UTC m=+0.238085768 container init 7957dc2f2a3c0b63d3122a9e8a4ad123306e9c918e49e299ca7d0090aaae736b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate-test, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, architecture=x86_64, io.buildah.version=1.33.12, 
CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, RELEASE=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph) Oct 14 03:52:41 localhost podman[31673]: 2025-10-14 07:52:41.565213793 +0000 UTC m=+0.252552052 container start 7957dc2f2a3c0b63d3122a9e8a4ad123306e9c918e49e299ca7d0090aaae736b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate-test, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, vcs-type=git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph 
Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 03:52:41 localhost podman[31673]: 2025-10-14 07:52:41.565502051 +0000 UTC m=+0.252840320 container attach 7957dc2f2a3c0b63d3122a9e8a4ad123306e9c918e49e299ca7d0090aaae736b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate-test, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, distribution-scope=public, architecture=x86_64, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, io.buildah.version=1.33.12, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 03:52:41 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate-test[31870]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Oct 14 03:52:41 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate-test[31870]: [--no-systemd] [--no-tmpfs] Oct 14 03:52:41 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate-test[31870]: ceph-volume activate: error: unrecognized arguments: --bad-option Oct 14 03:52:41 localhost systemd[1]: libpod-7957dc2f2a3c0b63d3122a9e8a4ad123306e9c918e49e299ca7d0090aaae736b.scope: 
Deactivated successfully. Oct 14 03:52:41 localhost podman[31673]: 2025-10-14 07:52:41.808688991 +0000 UTC m=+0.496027270 container died 7957dc2f2a3c0b63d3122a9e8a4ad123306e9c918e49e299ca7d0090aaae736b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate-test, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, io.buildah.version=1.33.12, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 03:52:41 localhost systemd[1]: tmp-crun.q1UplU.mount: Deactivated successfully. Oct 14 03:52:41 localhost systemd[1]: var-lib-containers-storage-overlay-0154868c838982341a7d78d4e223f79cdb884bd2626bbf22660b27e9b93d9d20-merged.mount: Deactivated successfully. 
Oct 14 03:52:41 localhost podman[31908]: 2025-10-14 07:52:41.921784263 +0000 UTC m=+0.099393470 container remove 7957dc2f2a3c0b63d3122a9e8a4ad123306e9c918e49e299ca7d0090aaae736b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate-test, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-09-24T08:57:55, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 03:52:41 localhost systemd[1]: libpod-conmon-7957dc2f2a3c0b63d3122a9e8a4ad123306e9c918e49e299ca7d0090aaae736b.scope: Deactivated successfully. Oct 14 03:52:42 localhost systemd[1]: Reloading. Oct 14 03:52:42 localhost systemd-sysv-generator[31968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 03:52:42 localhost systemd-rc-local-generator[31962]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 03:52:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 03:52:42 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Oct 14 03:52:42 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Oct 14 03:52:42 localhost systemd[1]: Reloading. Oct 14 03:52:42 localhost systemd-rc-local-generator[32001]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 03:52:42 localhost systemd-sysv-generator[32007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 03:52:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 03:52:42 localhost systemd[1]: Starting Ceph osd.4 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf... 
Oct 14 03:52:42 localhost ceph-osd[31330]: osd.2 0 done with init, starting boot process Oct 14 03:52:42 localhost ceph-osd[31330]: osd.2 0 start_boot Oct 14 03:52:42 localhost ceph-osd[31330]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1 Oct 14 03:52:42 localhost ceph-osd[31330]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Oct 14 03:52:42 localhost ceph-osd[31330]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Oct 14 03:52:42 localhost ceph-osd[31330]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Oct 14 03:52:42 localhost ceph-osd[31330]: osd.2 0 bench count 12288000 bsize 4 KiB Oct 14 03:52:43 localhost podman[32065]: Oct 14 03:52:43 localhost podman[32065]: 2025-10-14 07:52:43.143677088 +0000 UTC m=+0.086819208 container create dc1e97035b0b6901ad915558b8868ab3777eab290685fe431f4c7ed8232aaaca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , 
distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 03:52:43 localhost podman[32065]: 2025-10-14 07:52:43.102684622 +0000 UTC m=+0.045826712 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:43 localhost systemd[1]: Started libcrun container. Oct 14 03:52:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f6aba450671cf3a1431bd86126932d1256a9a87ce913ff37a04d918fcbe7a0/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f6aba450671cf3a1431bd86126932d1256a9a87ce913ff37a04d918fcbe7a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f6aba450671cf3a1431bd86126932d1256a9a87ce913ff37a04d918fcbe7a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f6aba450671cf3a1431bd86126932d1256a9a87ce913ff37a04d918fcbe7a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f6aba450671cf3a1431bd86126932d1256a9a87ce913ff37a04d918fcbe7a0/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:43 localhost podman[32065]: 2025-10-14 07:52:43.275121662 +0000 UTC m=+0.218263772 container init dc1e97035b0b6901ad915558b8868ab3777eab290685fe431f4c7ed8232aaaca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_BRANCH=main, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph) Oct 14 03:52:43 localhost podman[32065]: 2025-10-14 07:52:43.286717388 +0000 UTC m=+0.229859498 container start dc1e97035b0b6901ad915558b8868ab3777eab290685fe431f4c7ed8232aaaca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7) Oct 14 03:52:43 localhost podman[32065]: 2025-10-14 07:52:43.286959794 +0000 UTC m=+0.230101904 container attach dc1e97035b0b6901ad915558b8868ab3777eab290685fe431f4c7ed8232aaaca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True) Oct 14 03:52:43 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate[32080]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Oct 14 03:52:43 localhost bash[32065]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Oct 14 03:52:43 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate[32080]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Oct 14 03:52:43 localhost bash[32065]: Running command: 
/usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Oct 14 03:52:43 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate[32080]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Oct 14 03:52:43 localhost bash[32065]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Oct 14 03:52:43 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate[32080]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Oct 14 03:52:43 localhost bash[32065]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Oct 14 03:52:43 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate[32080]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block Oct 14 03:52:43 localhost bash[32065]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block Oct 14 03:52:43 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate[32080]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Oct 14 03:52:43 localhost bash[32065]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Oct 14 03:52:43 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate[32080]: --> ceph-volume raw activate successful for osd ID: 4 Oct 14 03:52:43 localhost bash[32065]: --> ceph-volume raw activate successful for osd ID: 4 Oct 14 03:52:44 localhost systemd[1]: libpod-dc1e97035b0b6901ad915558b8868ab3777eab290685fe431f4c7ed8232aaaca.scope: Deactivated successfully. 
Oct 14 03:52:44 localhost podman[32065]: 2025-10-14 07:52:44.014396363 +0000 UTC m=+0.957538473 container died dc1e97035b0b6901ad915558b8868ab3777eab290685fe431f4c7ed8232aaaca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=553) Oct 14 03:52:44 localhost systemd[1]: var-lib-containers-storage-overlay-19f6aba450671cf3a1431bd86126932d1256a9a87ce913ff37a04d918fcbe7a0-merged.mount: Deactivated successfully. 
Oct 14 03:52:44 localhost podman[32200]: 2025-10-14 07:52:44.120122119 +0000 UTC m=+0.099064250 container remove dc1e97035b0b6901ad915558b8868ab3777eab290685fe431f4c7ed8232aaaca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4-activate, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=) Oct 14 03:52:44 localhost podman[32264]: Oct 14 03:52:44 localhost podman[32264]: 2025-10-14 07:52:44.452865513 +0000 UTC m=+0.092018314 container create cab44c8ad2ce45c129f80dd111319001b76fd7c8aa5f59639524c2d3ccb54fea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, release=553, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7) Oct 14 03:52:44 localhost podman[32264]: 2025-10-14 07:52:44.410293903 +0000 UTC m=+0.049446734 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae184de23b7f32c061d1891c65d87dc653765ad5a6e68b36864e2dd49b7fd0e/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae184de23b7f32c061d1891c65d87dc653765ad5a6e68b36864e2dd49b7fd0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae184de23b7f32c061d1891c65d87dc653765ad5a6e68b36864e2dd49b7fd0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae184de23b7f32c061d1891c65d87dc653765ad5a6e68b36864e2dd49b7fd0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae184de23b7f32c061d1891c65d87dc653765ad5a6e68b36864e2dd49b7fd0e/merged/var/lib/ceph/osd/ceph-4 
supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:44 localhost podman[32264]: 2025-10-14 07:52:44.582482117 +0000 UTC m=+0.221634908 container init cab44c8ad2ce45c129f80dd111319001b76fd7c8aa5f59639524c2d3ccb54fea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, release=553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 03:52:44 localhost systemd[1]: tmp-crun.IpKxKG.mount: Deactivated successfully. 
Oct 14 03:52:44 localhost podman[32264]: 2025-10-14 07:52:44.607600029 +0000 UTC m=+0.246752840 container start cab44c8ad2ce45c129f80dd111319001b76fd7c8aa5f59639524c2d3ccb54fea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, version=7, release=553, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 03:52:44 localhost bash[32264]: cab44c8ad2ce45c129f80dd111319001b76fd7c8aa5f59639524c2d3ccb54fea Oct 14 03:52:44 localhost systemd[1]: Started Ceph osd.4 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf. 
Oct 14 03:52:44 localhost ceph-osd[32282]: set uid:gid to 167:167 (ceph:ceph) Oct 14 03:52:44 localhost ceph-osd[32282]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Oct 14 03:52:44 localhost ceph-osd[32282]: pidfile_write: ignore empty --pid-file Oct 14 03:52:44 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 14 03:52:44 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 14 03:52:44 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 14 03:52:44 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 14 03:52:44 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 14 03:52:44 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 14 03:52:44 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 14 03:52:44 localhost ceph-osd[32282]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB Oct 14 03:52:44 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) close Oct 14 03:52:44 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) close Oct 14 03:52:45 localhost ceph-osd[32282]: starting osd.4 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal Oct 14 03:52:45 localhost ceph-osd[32282]: 
load: jerasure load: lrc Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) close Oct 14 03:52:45 localhost podman[32370]: Oct 14 03:52:45 localhost podman[32370]: 2025-10-14 07:52:45.46704368 +0000 UTC m=+0.082450957 container create 4d00e7dc2d50f1ea9f1f00f95579e8b87dea32aedba294b4635ac8e9bbae906a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldstine, io.buildah.version=1.33.12, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, RELEASE=main, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph) Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) close Oct 14 03:52:45 localhost systemd[1]: Started libpod-conmon-4d00e7dc2d50f1ea9f1f00f95579e8b87dea32aedba294b4635ac8e9bbae906a.scope. Oct 14 03:52:45 localhost systemd[1]: Started libcrun container. 
Oct 14 03:52:45 localhost podman[32370]: 2025-10-14 07:52:45.431441604 +0000 UTC m=+0.046848891 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:45 localhost podman[32370]: 2025-10-14 07:52:45.539988549 +0000 UTC m=+0.155395806 container init 4d00e7dc2d50f1ea9f1f00f95579e8b87dea32aedba294b4635ac8e9bbae906a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldstine, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 03:52:45 localhost vibrant_goldstine[32390]: 167 167 Oct 14 03:52:45 localhost systemd[1]: libpod-4d00e7dc2d50f1ea9f1f00f95579e8b87dea32aedba294b4635ac8e9bbae906a.scope: Deactivated successfully. 
Oct 14 03:52:45 localhost podman[32370]: 2025-10-14 07:52:45.555645127 +0000 UTC m=+0.171052384 container start 4d00e7dc2d50f1ea9f1f00f95579e8b87dea32aedba294b4635ac8e9bbae906a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldstine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=553, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, build-date=2025-09-24T08:57:55, version=7) Oct 14 03:52:45 localhost podman[32370]: 2025-10-14 07:52:45.556040428 +0000 UTC m=+0.171447715 container attach 4d00e7dc2d50f1ea9f1f00f95579e8b87dea32aedba294b4635ac8e9bbae906a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldstine, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, release=553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, 
com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.openshift.expose-services=) Oct 14 03:52:45 localhost podman[32370]: 2025-10-14 07:52:45.557497778 +0000 UTC m=+0.172905125 container died 4d00e7dc2d50f1ea9f1f00f95579e8b87dea32aedba294b4635ac8e9bbae906a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldstine, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-type=git, RELEASE=main, io.buildah.version=1.33.12, release=553, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 03:52:45 localhost systemd[1]: 
var-lib-containers-storage-overlay-73d4a486b69672df16d7f949c31fca462f3f49b4dd2467747d61c7cabf76d708-merged.mount: Deactivated successfully. Oct 14 03:52:45 localhost podman[32395]: 2025-10-14 07:52:45.661522787 +0000 UTC m=+0.097355302 container remove 4d00e7dc2d50f1ea9f1f00f95579e8b87dea32aedba294b4635ac8e9bbae906a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldstine, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, build-date=2025-09-24T08:57:55, distribution-scope=public, release=553) Oct 14 03:52:45 localhost systemd[1]: libpod-conmon-4d00e7dc2d50f1ea9f1f00f95579e8b87dea32aedba294b4635ac8e9bbae906a.scope: Deactivated successfully. Oct 14 03:52:45 localhost ceph-osd[31330]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.621 iops: 7326.977 elapsed_sec: 0.409 Oct 14 03:52:45 localhost ceph-osd[31330]: log_channel(cluster) log [WRN] : OSD bench result of 7326.977314 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. 
The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. Oct 14 03:52:45 localhost ceph-osd[31330]: osd.2 0 waiting for initial osdmap Oct 14 03:52:45 localhost ceph-osd[32282]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Oct 14 03:52:45 localhost ceph-osd[32282]: osd.4:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Oct 14 03:52:45 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2[31326]: 2025-10-14T07:52:45.747+0000 7efd82281640 -1 osd.2 0 waiting for initial osdmap Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31ae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 14 03:52:45 localhost ceph-osd[32282]: bluefs add_block_device bdev 1 path 
/var/lib/ceph/osd/ceph-4/block size 7.0 GiB Oct 14 03:52:45 localhost ceph-osd[32282]: bluefs mount Oct 14 03:52:45 localhost ceph-osd[32282]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Oct 14 03:52:45 localhost ceph-osd[32282]: bluefs mount shared_bdev_used = 0 Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: RocksDB version: 7.9.2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Git sha 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: DB SUMMARY Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: DB Session ID: PMM3GBU9H2LAL53DYJOR Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: CURRENT file: CURRENT Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: IDENTITY file: IDENTITY Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.error_if_exists: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.create_if_missing: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.env: 0x557c1d5aec40 Oct 14 03:52:45 
localhost ceph-osd[32282]: rocksdb: Options.fs: LegacyFileSystem Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.info_log: 0x557c1e2a6340 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_file_opening_threads: 16 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.statistics: (nil) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.use_fsync: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_log_file_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.log_file_time_to_roll: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.keep_log_file_num: 1000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.recycle_log_file_num: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.allow_fallocate: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.allow_mmap_reads: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.allow_mmap_writes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.use_direct_reads: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.create_missing_column_families: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.db_log_dir: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.wal_dir: db.wal Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_cache_numshardbits: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.is_fd_close_on_exec: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.advise_random_on_open: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.db_write_buffer_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_manager: 0x557c1d304140 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.random_access_max_buffer_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.use_adaptive_mutex: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.rate_limiter: (nil) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.wal_recovery_mode: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_thread_tracking: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_pipelined_write: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.unordered_write: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.row_cache: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.wal_filter: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.allow_ingest_behind: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.two_write_queues: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.manual_wal_flush: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: 
rocksdb: Options.wal_compression: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.atomic_flush: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.persist_stats_to_disk: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.log_readahead_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.file_checksum_gen_factory: Unknown Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.best_efforts_recovery: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.allow_data_in_errors: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.db_host_id: __hostname__ Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enforce_single_del_contracts: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_background_jobs: 4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_background_compactions: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_subcompactions: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.writable_file_max_buffer_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.delayed_write_rate : 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_total_wal_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.stats_dump_period_sec: 600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.stats_persist_period_sec: 600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.stats_history_buffer_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_open_files: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bytes_per_sync: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.strict_bytes_per_sync: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_readahead_size: 2097152 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_background_flushes: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Compression algorithms supported: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: #011kZSTD supported: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: #011kXpressCompression supported: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: #011kBZip2Compression supported: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: #011kLZ4Compression supported: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: #011kZlibCompression supported: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: #011kSnappyCompression supported: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: DMutex implementation: pthread_mutex_t Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Oct 14 
03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: 
rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.enable_blob_garbage_collection: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f2850#012 block_cache_name: 
BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:45 localhost 
ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:45 localhost 
ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:45 localhost ceph-osd[31330]: osd.2 10 crush map has features 288514050185494528, adjusting msgr requires for clients Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:45 localhost ceph-osd[31330]: osd.2 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:45 localhost ceph-osd[31330]: osd.2 10 crush map has features 3314932999778484224, adjusting msgr requires for osds Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:45 localhost ceph-osd[31330]: osd.2 10 check_osdmap_features require_osd_release unknown -> reef Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 
68719476736 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:45 
localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
[db/column_family.cc:630] --------------- Options for column family [m-1]: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 
03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:45 localhost 
ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:45 localhost 
ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.enable_blob_garbage_collection: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:45 localhost ceph-osd[31330]: osd.2 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Oct 14 03:52:45 localhost ceph-osd[31330]: osd.2 10 set_numa_affinity not setting numa affinity Oct 14 03:52:45 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-2[31326]: 2025-10-14T07:52:45.766+0000 7efd7d8ab640 -1 osd.2 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Oct 14 03:52:45 localhost ceph-osd[31330]: osd.2 10 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:45 
localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:45 localhost 
ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.blob_compression_type: NoCompression Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 
checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.target_file_size_multiplier: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.report_bg_io_stats: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: 
rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost 
ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
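The `#012` markers in the `table_factory options:` blobs above are syslog's octal escapes for embedded newlines (and `#011` for tabs): RocksDB emits that options dump as a multi-line block, and the journal flattens it onto one line. A minimal helper to restore the original layout might look like this (illustrative only; not part of Ceph or RocksDB):

```python
import re

def unescape_syslog(line: str) -> str:
    """Replace syslog octal escapes such as #012 (newline) and #011 (tab)
    with the characters they encode, restoring multi-line log payloads."""
    return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), line)

# Example fragment taken from the block_cache_options dump above:
print(unescape_syslog("capacity : 536870912#012 num_shard_bits : 4"))
```

Running this over a `table_factory options:` line yields the indented, one-option-per-line dump RocksDB originally wrote.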
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.enable_blob_files: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6720)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.disable_auto_compactions: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:45 localhost 
ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:45 
localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1e2a6720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:45 localhost 
ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 
32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 46536777-88a0-4a33-90cc-d68df5842b04
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428365770598, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428365770739, "job": 1, "event": "recovery_finished"}
Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old nid_max 1025
Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old blobid_max 10240
Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x1000
Oct 14 03:52:45 localhost ceph-osd[32282]: freelist init
Oct 14 03:52:45 localhost ceph-osd[32282]: freelist _read_cfg
Oct 14 03:52:45 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 14 03:52:45 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 14 03:52:45 localhost ceph-osd[32282]: bluefs umount
Oct 14 03:52:45 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) close
Oct 14 03:52:45 localhost podman[32416]:
Oct 14 03:52:45 localhost podman[32416]: 2025-10-14 07:52:45.832028084 +0000 UTC m=+0.072176379 container create cdecfe4127678a52d698368c4dc30cb26775a82bcac484059a86b36982b2f22a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_mclaren, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, release=553, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, distribution-scope=public, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7)
Oct 14 03:52:45 localhost systemd[1]: Started libpod-conmon-cdecfe4127678a52d698368c4dc30cb26775a82bcac484059a86b36982b2f22a.scope.
Oct 14 03:52:45 localhost systemd[1]: Started libcrun container.
Oct 14 03:52:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd7d92be06db62cb757475b2367e2e53fdcb1ac7d7b10ef2d7d48ce1bba63ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:45 localhost podman[32416]: 2025-10-14 07:52:45.803641341 +0000 UTC m=+0.043789666 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 03:52:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd7d92be06db62cb757475b2367e2e53fdcb1ac7d7b10ef2d7d48ce1bba63ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd7d92be06db62cb757475b2367e2e53fdcb1ac7d7b10ef2d7d48ce1bba63ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 03:52:45 localhost podman[32416]: 2025-10-14 07:52:45.926399303 +0000 UTC m=+0.166547598 container init cdecfe4127678a52d698368c4dc30cb26775a82bcac484059a86b36982b2f22a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_mclaren, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7)
Oct 14 03:52:45 localhost podman[32416]: 2025-10-14 07:52:45.938796909 +0000 UTC m=+0.178945214 container start cdecfe4127678a52d698368c4dc30cb26775a82bcac484059a86b36982b2f22a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_mclaren, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, release=553, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, distribution-scope=public, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Oct 14 03:52:45 localhost podman[32416]: 2025-10-14 07:52:45.93920347 +0000 UTC m=+0.179351785 container attach cdecfe4127678a52d698368c4dc30cb26775a82bcac484059a86b36982b2f22a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_mclaren, RELEASE=main, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, name=rhceph, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux , release=553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 14 03:52:46 localhost ceph-osd[31330]: osd.2 11 state: booting -> active
Oct 14 03:52:46 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Oct 14 03:52:46 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Oct 14 03:52:46 localhost ceph-osd[32282]: bdev(0x557c1d31b180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 03:52:46 localhost ceph-osd[32282]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Oct 14 03:52:46 localhost ceph-osd[32282]: bluefs mount
Oct 14 03:52:46 localhost ceph-osd[32282]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 14 03:52:46 localhost ceph-osd[32282]: bluefs mount shared_bdev_used = 4718592
Oct 14 03:52:46 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: RocksDB version: 7.9.2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Git sha 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: DB SUMMARY
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: DB Session ID: PMM3GBU9H2LAL53DYJOQ
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: CURRENT file: CURRENT
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: IDENTITY file: IDENTITY
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.error_if_exists: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.create_if_missing: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_checks: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.flush_verify_memtable_count: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.env: 0x557c1d5afc00
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.fs: LegacyFileSystem
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.info_log: 0x557c1e2a6e40
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_file_opening_threads: 16
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.statistics: (nil)
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.use_fsync: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_log_file_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_manifest_file_size: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.log_file_time_to_roll: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.keep_log_file_num: 1000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.recycle_log_file_num: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.allow_fallocate: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.allow_mmap_reads: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.allow_mmap_writes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.use_direct_reads: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.create_missing_column_families: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.db_log_dir:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.wal_dir: db.wal
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_cache_numshardbits: 6
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.WAL_ttl_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.WAL_size_limit_MB: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.manifest_preallocation_size: 4194304
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.is_fd_close_on_exec: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.advise_random_on_open: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.db_write_buffer_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_manager: 0x557c1d305540
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.access_hint_on_compaction_start: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.random_access_max_buffer_size: 1048576
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.use_adaptive_mutex: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.rate_limiter: (nil)
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.wal_recovery_mode: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_thread_tracking: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_pipelined_write: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.unordered_write: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.allow_concurrent_memtable_write: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_thread_max_yield_usec: 100
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_thread_slow_yield_usec: 3
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.row_cache: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.wal_filter: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.avoid_flush_during_recovery: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.allow_ingest_behind: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.two_write_queues: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.manual_wal_flush: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.wal_compression: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.atomic_flush: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.persist_stats_to_disk: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_dbid_to_manifest: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.log_readahead_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.file_checksum_gen_factory: Unknown
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.best_efforts_recovery: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.allow_data_in_errors: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.db_host_id: __hostname__
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enforce_single_del_contracts: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_background_jobs: 4
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_background_compactions: -1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_subcompactions: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.avoid_flush_during_shutdown: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.writable_file_max_buffer_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.delayed_write_rate : 16777216
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_total_wal_size: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.stats_dump_period_sec: 600
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.stats_persist_period_sec: 600
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.stats_history_buffer_size: 1048576
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_open_files: -1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bytes_per_sync: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.wal_bytes_per_sync: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.strict_bytes_per_sync: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_readahead_size: 2097152
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_background_flushes: -1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Compression algorithms supported:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: #011kZSTD supported: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: #011kXpressCompression supported: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: #011kBZip2Compression supported: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: #011kLZ4Compression supported: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: #011kZlibCompression supported: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: #011kSnappyCompression supported: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6de0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6de0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb:
Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: 
Options.enable_blob_files: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6de0)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: 
Options.disable_auto_compactions: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:46 localhost 
ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:46 
localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6de0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:46 localhost 
ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: 
Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 
32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6de0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 
block_size: 4096
 block_size_deviation: 10
 block_restart_interval: 16
 index_block_restart_interval: 1
 metadata_block_size: 4096
 partition_filters: 0
 use_delta_encoding: 1
 filter_policy: bloomfilter
 whole_key_filtering: 1
 verify_compression: 0
 read_amp_bytes_per_bit: 0
 format_version: 5
 enable_index_compression: 1
 block_align: 0
 max_auto_readahead_size: 262144
 prepopulate_block_cache: 0
 initial_auto_readahead_size: 8192
 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6de0)
 cache_index_and_filter_blocks: 1
 cache_index_and_filter_blocks_with_high_priority: 0
 pin_l0_filter_and_index_blocks_in_cache: 0
 pin_top_level_index_and_filter: 1
 index_type: 0
 data_block_index_type: 0
 index_shortening: 1
 data_block_hash_table_util_ratio: 0.750000
 checksum: 4
 no_block_cache: 0
 block_cache: 0x557c1d2f22d0
 block_cache_name: BinnedLRUCache
 block_cache_options:
 capacity : 483183820
 num_shard_bits : 4
 strict_capacity_limit : 0
 high_pri_pool_ratio: 0.000
 block_cache_compressed: (nil)
 persistent_cache: (nil)
 block_size: 4096
 block_size_deviation: 10
 block_restart_interval: 16
 index_block_restart_interval: 1
 metadata_block_size: 4096
 partition_filters: 0
 use_delta_encoding: 1
 filter_policy: bloomfilter
 whole_key_filtering: 1
 verify_compression: 0
 read_amp_bytes_per_bit: 0
 format_version: 5
 enable_index_compression: 1
 block_align: 0
 max_auto_readahead_size: 262144
 prepopulate_block_cache: 0
 initial_auto_readahead_size: 8192
 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6de0)
 cache_index_and_filter_blocks: 1
 cache_index_and_filter_blocks_with_high_priority: 0
 pin_l0_filter_and_index_blocks_in_cache: 0
 pin_top_level_index_and_filter: 1
 index_type: 0
 data_block_index_type: 0
 index_shortening: 1
 data_block_hash_table_util_ratio: 0.750000
 checksum: 4
 no_block_cache: 0
 block_cache: 0x557c1d2f22d0
 block_cache_name: BinnedLRUCache
 block_cache_options:
 capacity : 483183820
 num_shard_bits : 4
 strict_capacity_limit : 0
 high_pri_pool_ratio: 0.000
 block_cache_compressed: (nil)
 persistent_cache: (nil)
 block_size: 4096
 block_size_deviation: 10
 block_restart_interval: 16
 index_block_restart_interval: 1
 metadata_block_size: 4096
 partition_filters: 0
 use_delta_encoding: 1
 filter_policy: bloomfilter
 whole_key_filtering: 1
 verify_compression: 0
 read_amp_bytes_per_bit: 0
 format_version: 5
 enable_index_compression: 1
 block_align: 0
 max_auto_readahead_size: 262144
 prepopulate_block_cache: 0
 initial_auto_readahead_size: 8192
 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6b60)
 cache_index_and_filter_blocks: 1
 cache_index_and_filter_blocks_with_high_priority: 0
 pin_l0_filter_and_index_blocks_in_cache: 0
 pin_top_level_index_and_filter: 1
 index_type: 0
 data_block_index_type: 0
 index_shortening: 1
 data_block_hash_table_util_ratio: 0.750000
 checksum: 4
 no_block_cache: 0
 block_cache: 0x557c1d2f3610
 block_cache_name: BinnedLRUCache
 block_cache_options:
 capacity : 536870912
 num_shard_bits : 4
 strict_capacity_limit : 0
 high_pri_pool_ratio: 0.000
 block_cache_compressed: (nil)
 persistent_cache: (nil)
 block_size: 4096
 block_size_deviation: 10
 block_restart_interval: 16
 index_block_restart_interval: 1
 metadata_block_size: 4096
 partition_filters: 0
 use_delta_encoding: 1
 filter_policy: bloomfilter
 whole_key_filtering: 1
 verify_compression: 0
 read_amp_bytes_per_bit: 0
 format_version: 5
 enable_index_compression: 1
 block_align: 0
 max_auto_readahead_size: 262144
 prepopulate_block_cache: 0
 initial_auto_readahead_size: 8192
 num_file_reads_for_auto_readahead: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 14 03:52:46
localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6b60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:46 localhost 
ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 
03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:46 localhost 
ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:46 localhost 
ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, 
name: O-2) Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.merge_operator: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_filter_factory: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.sst_partitioner_factory: None Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557c1d3b6b60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x557c1d2f3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.write_buffer_size: 16777216 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number: 64 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression: LZ4 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression: Disabled Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.prefix_extractor: nullptr Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.num_levels: 7 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:46 localhost ceph-osd[32282]: 
rocksdb: Options.compression_opts.window_bits: -14 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.level: 32767 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.enabled: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 
Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.arena_block_size: 1048576 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_support: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.bloom_locality: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.max_successive_merges: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.force_consistency_checks: 1 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.ttl: 2592000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_files: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.min_blob_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_size: 268435456 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: 
Options.blob_compression_type: NoCompression Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: 
[db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 46536777-88a0-4a33-90cc-d68df5842b04 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428366060731, "job": 1, "event": "recovery_started", "wal_files": [31]} Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428366065530, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, 
"num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760428366, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "46536777-88a0-4a33-90cc-d68df5842b04", "db_session_id": "PMM3GBU9H2LAL53DYJOQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428366070159, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; 
max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760428366, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "46536777-88a0-4a33-90cc-d68df5842b04", "db_session_id": "PMM3GBU9H2LAL53DYJOQ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428366073785, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760428366, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "46536777-88a0-4a33-90cc-d68df5842b04", "db_session_id": "PMM3GBU9H2LAL53DYJOQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO 
error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760428366077548, "job": 1, "event": "recovery_finished"} Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557c1d3b8380 Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: DB pointer 0x557c1e1fda00 Oct 14 03:52:46 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Oct 14 03:52:46 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super from 4, latest 4 Oct 14 03:52:46 localhost ceph-osd[32282]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super done Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 03:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 
0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 
memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557c1d2f22d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557c1d2f22d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, 
interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012 Oct 14 03:52:46 localhost ceph-osd[32282]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Oct 14 03:52:46 localhost ceph-osd[32282]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Oct 14 03:52:46 localhost ceph-osd[32282]: _get_class not permitted to load lua Oct 14 03:52:46 localhost ceph-osd[32282]: _get_class not permitted to load sdk Oct 14 03:52:46 localhost ceph-osd[32282]: _get_class not permitted to load test_remote_reads Oct 14 03:52:46 localhost ceph-osd[32282]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for clients Oct 14 03:52:46 localhost ceph-osd[32282]: osd.4 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Oct 14 03:52:46 localhost ceph-osd[32282]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for osds Oct 14 03:52:46 localhost ceph-osd[32282]: osd.4 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Oct 14 03:52:46 localhost ceph-osd[32282]: osd.4 0 load_pgs Oct 14 03:52:46 localhost ceph-osd[32282]: osd.4 0 load_pgs opened 0 pgs Oct 14 03:52:46 localhost ceph-osd[32282]: osd.4 0 log_to_monitors true Oct 14 03:52:46 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4[32278]: 2025-10-14T07:52:46.117+0000 7f20d2e52a80 -1 osd.4 0 
log_to_monitors true Oct 14 03:52:46 localhost silly_mclaren[32626]: { Oct 14 03:52:46 localhost silly_mclaren[32626]: "8798be35-0a9e-4e0d-be22-4c39dcfea81e": { Oct 14 03:52:46 localhost silly_mclaren[32626]: "ceph_fsid": "fcadf6e2-9176-5818-a8d0-37b19acf8eaf", Oct 14 03:52:46 localhost silly_mclaren[32626]: "device": "/dev/mapper/ceph_vg0-ceph_lv0", Oct 14 03:52:46 localhost silly_mclaren[32626]: "osd_id": 2, Oct 14 03:52:46 localhost silly_mclaren[32626]: "osd_uuid": "8798be35-0a9e-4e0d-be22-4c39dcfea81e", Oct 14 03:52:46 localhost silly_mclaren[32626]: "type": "bluestore" Oct 14 03:52:46 localhost silly_mclaren[32626]: }, Oct 14 03:52:46 localhost silly_mclaren[32626]: "e8f18853-0710-4071-a9de-3872345d6a39": { Oct 14 03:52:46 localhost silly_mclaren[32626]: "ceph_fsid": "fcadf6e2-9176-5818-a8d0-37b19acf8eaf", Oct 14 03:52:46 localhost silly_mclaren[32626]: "device": "/dev/mapper/ceph_vg1-ceph_lv1", Oct 14 03:52:46 localhost silly_mclaren[32626]: "osd_id": 4, Oct 14 03:52:46 localhost silly_mclaren[32626]: "osd_uuid": "e8f18853-0710-4071-a9de-3872345d6a39", Oct 14 03:52:46 localhost silly_mclaren[32626]: "type": "bluestore" Oct 14 03:52:46 localhost silly_mclaren[32626]: } Oct 14 03:52:46 localhost silly_mclaren[32626]: } Oct 14 03:52:46 localhost systemd[1]: libpod-cdecfe4127678a52d698368c4dc30cb26775a82bcac484059a86b36982b2f22a.scope: Deactivated successfully. 
Oct 14 03:52:46 localhost podman[32416]: 2025-10-14 07:52:46.509538248 +0000 UTC m=+0.749686543 container died cdecfe4127678a52d698368c4dc30cb26775a82bcac484059a86b36982b2f22a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_mclaren, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Oct 14 03:52:46 localhost systemd[1]: var-lib-containers-storage-overlay-ddd7d92be06db62cb757475b2367e2e53fdcb1ac7d7b10ef2d7d48ce1bba63ec-merged.mount: Deactivated successfully. 
Oct 14 03:52:46 localhost podman[32878]: 2025-10-14 07:52:46.600960424 +0000 UTC m=+0.079577016 container remove cdecfe4127678a52d698368c4dc30cb26775a82bcac484059a86b36982b2f22a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_mclaren, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=553, CEPH_POINT_RELEASE=, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, architecture=x86_64, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=) Oct 14 03:52:46 localhost systemd[1]: libpod-conmon-cdecfe4127678a52d698368c4dc30cb26775a82bcac484059a86b36982b2f22a.scope: Deactivated successfully. 
Oct 14 03:52:47 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Oct 14 03:52:47 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Oct 14 03:52:48 localhost ceph-osd[32282]: osd.4 0 done with init, starting boot process Oct 14 03:52:48 localhost ceph-osd[32282]: osd.4 0 start_boot Oct 14 03:52:48 localhost ceph-osd[32282]: osd.4 0 maybe_override_options_for_qos osd_max_backfills set to 1 Oct 14 03:52:48 localhost ceph-osd[32282]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Oct 14 03:52:48 localhost ceph-osd[32282]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Oct 14 03:52:48 localhost ceph-osd[32282]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Oct 14 03:52:48 localhost ceph-osd[32282]: osd.4 0 bench count 12288000 bsize 4 KiB Oct 14 03:52:48 localhost ceph-osd[31330]: osd.2 13 crush map has features 288514051259236352, adjusting msgr requires for clients Oct 14 03:52:48 localhost ceph-osd[31330]: osd.2 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons Oct 14 03:52:48 localhost ceph-osd[31330]: osd.2 13 crush map has features 3314933000852226048, adjusting msgr requires for osds Oct 14 03:52:48 localhost podman[33003]: 2025-10-14 07:52:48.116096798 +0000 UTC m=+0.093245609 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux , version=7) Oct 14 03:52:48 localhost podman[33003]: 2025-10-14 07:52:48.242239025 +0000 UTC m=+0.219387816 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, ceph=True, build-date=2025-09-24T08:57:55, release=553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 14 03:52:49 localhost ceph-osd[31330]: osd.2 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1,2] r=1 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 03:52:50 localhost podman[33198]: Oct 14 03:52:50 localhost podman[33198]: 2025-10-14 07:52:50.119854263 +0000 UTC m=+0.081242283 container create b400fef531ee5a96ca49fb1d62064b0e7df675c846f6c2f198987a65841dbd5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_franklin, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=) Oct 14 03:52:50 localhost systemd[1]: Started libpod-conmon-b400fef531ee5a96ca49fb1d62064b0e7df675c846f6c2f198987a65841dbd5c.scope. Oct 14 03:52:50 localhost podman[33198]: 2025-10-14 07:52:50.085206853 +0000 UTC m=+0.046594903 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:50 localhost systemd[1]: Started libcrun container. 
Oct 14 03:52:50 localhost podman[33198]: 2025-10-14 07:52:50.209958242 +0000 UTC m=+0.171346292 container init b400fef531ee5a96ca49fb1d62064b0e7df675c846f6c2f198987a65841dbd5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_franklin, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, architecture=x86_64, ceph=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, version=7, release=553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12) Oct 14 03:52:50 localhost podman[33198]: 2025-10-14 07:52:50.220000333 +0000 UTC m=+0.181388373 container start b400fef531ee5a96ca49fb1d62064b0e7df675c846f6c2f198987a65841dbd5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_franklin, release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, 
io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main) Oct 14 03:52:50 localhost podman[33198]: 2025-10-14 07:52:50.220447306 +0000 UTC m=+0.181835356 container attach b400fef531ee5a96ca49fb1d62064b0e7df675c846f6c2f198987a65841dbd5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_franklin, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.33.12, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True) Oct 14 03:52:50 localhost charming_franklin[33213]: 167 167 Oct 14 03:52:50 localhost systemd[1]: 
libpod-b400fef531ee5a96ca49fb1d62064b0e7df675c846f6c2f198987a65841dbd5c.scope: Deactivated successfully. Oct 14 03:52:50 localhost podman[33198]: 2025-10-14 07:52:50.225138986 +0000 UTC m=+0.186527036 container died b400fef531ee5a96ca49fb1d62064b0e7df675c846f6c2f198987a65841dbd5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_franklin, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, name=rhceph, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 03:52:50 localhost ceph-osd[31330]: osd.2 pg_epoch: 15 pg[1.0( v 14'64 (0'0,14'64] local-lis/les=13/14 n=2 ec=13/13 lis/c=13/0 les/c/f=14/0/0 sis=15 pruub=14.885873795s) [1,2,3] r=1 lpr=15 pi=[13,15)/1 luod=0'0 lua=0'0 crt=14'64 lcod 14'63 mlcod 0'0 active pruub 23.632066727s@ mbc={}] start_peering_interval up [1,2] -> [1,2,3], acting [1,2] -> [1,2,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 03:52:50 localhost ceph-osd[31330]: osd.2 pg_epoch: 15 pg[1.0( v 14'64 (0'0,14'64] local-lis/les=13/14 n=2 ec=13/13 lis/c=13/0 
les/c/f=14/0/0 sis=15 pruub=14.885756493s) [1,2,3] r=1 lpr=15 pi=[13,15)/1 crt=14'64 lcod 14'63 mlcod 0'0 unknown NOTIFY pruub 23.632066727s@ mbc={}] state: transitioning to Stray Oct 14 03:52:50 localhost podman[33218]: 2025-10-14 07:52:50.329453134 +0000 UTC m=+0.087681574 container remove b400fef531ee5a96ca49fb1d62064b0e7df675c846f6c2f198987a65841dbd5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_franklin, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True) Oct 14 03:52:50 localhost systemd[1]: libpod-conmon-b400fef531ee5a96ca49fb1d62064b0e7df675c846f6c2f198987a65841dbd5c.scope: Deactivated successfully. 
Oct 14 03:52:50 localhost podman[33237]: Oct 14 03:52:50 localhost podman[33237]: 2025-10-14 07:52:50.527983244 +0000 UTC m=+0.066665135 container create d0e765001e1d8374125a2656538a18760abfaa679a7791c30fce687b03227a78 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_rubin, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_CLEAN=True, distribution-scope=public) Oct 14 03:52:50 localhost systemd[1]: Started libpod-conmon-d0e765001e1d8374125a2656538a18760abfaa679a7791c30fce687b03227a78.scope. Oct 14 03:52:50 localhost systemd[1]: Started libcrun container. 
Oct 14 03:52:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18804a5def84d9a21a019aeb389ba97eeff944ef9e6ba59c85648c297e92fa5f/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18804a5def84d9a21a019aeb389ba97eeff944ef9e6ba59c85648c297e92fa5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:50 localhost podman[33237]: 2025-10-14 07:52:50.500077734 +0000 UTC m=+0.038759645 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 03:52:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18804a5def84d9a21a019aeb389ba97eeff944ef9e6ba59c85648c297e92fa5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 03:52:50 localhost podman[33237]: 2025-10-14 07:52:50.61222555 +0000 UTC m=+0.150907471 container init d0e765001e1d8374125a2656538a18760abfaa679a7791c30fce687b03227a78 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_rubin, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, name=rhceph, summary=Provides the latest 
Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64) Oct 14 03:52:50 localhost podman[33237]: 2025-10-14 07:52:50.624115732 +0000 UTC m=+0.162797613 container start d0e765001e1d8374125a2656538a18760abfaa679a7791c30fce687b03227a78 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_rubin, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553) Oct 14 03:52:50 localhost podman[33237]: 2025-10-14 07:52:50.624600776 +0000 UTC m=+0.163282747 container attach d0e765001e1d8374125a2656538a18760abfaa679a7791c30fce687b03227a78 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_rubin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, 
vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.expose-services=) Oct 14 03:52:50 localhost ceph-osd[32282]: osd.4 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 31.552 iops: 8077.407 elapsed_sec: 0.371 Oct 14 03:52:50 localhost ceph-osd[32282]: log_channel(cluster) log [WRN] : OSD bench result of 8077.407302 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.4. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
Oct 14 03:52:50 localhost ceph-osd[32282]: osd.4 0 waiting for initial osdmap
Oct 14 03:52:50 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4[32278]: 2025-10-14T07:52:50.725+0000 7f20cedd1640 -1 osd.4 0 waiting for initial osdmap
Oct 14 03:52:50 localhost ceph-osd[32282]: osd.4 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 14 03:52:50 localhost ceph-osd[32282]: osd.4 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 14 03:52:50 localhost ceph-osd[32282]: osd.4 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 14 03:52:50 localhost ceph-osd[32282]: osd.4 15 check_osdmap_features require_osd_release unknown -> reef
Oct 14 03:52:50 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-osd-4[32278]: 2025-10-14T07:52:50.743+0000 7f20ca3fb640 -1 osd.4 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 14 03:52:50 localhost ceph-osd[32282]: osd.4 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 14 03:52:50 localhost ceph-osd[32282]: osd.4 15 set_numa_affinity not setting numa affinity
Oct 14 03:52:50 localhost ceph-osd[32282]: osd.4 15 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial
Oct 14 03:52:51 localhost systemd[1]: var-lib-containers-storage-overlay-0affe668c05d35aee4f81affdcfe99a3816671eb74aecf0620cd8fdd098a924d-merged.mount: Deactivated successfully.
Oct 14 03:52:51 localhost thirsty_rubin[33252]: [
Oct 14 03:52:51 localhost thirsty_rubin[33252]: {
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "available": false,
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "ceph_device": false,
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "lsm_data": {},
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "lvs": [],
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "path": "/dev/sr0",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "rejected_reasons": [
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "Insufficient space (<5GB)",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "Has a FileSystem"
Oct 14 03:52:51 localhost thirsty_rubin[33252]: ],
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "sys_api": {
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "actuators": null,
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "device_nodes": "sr0",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "human_readable_size": "482.00 KB",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "id_bus": "ata",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "model": "QEMU DVD-ROM",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "nr_requests": "2",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "partitions": {},
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "path": "/dev/sr0",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "removable": "1",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "rev": "2.5+",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "ro": "0",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "rotational": "1",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "sas_address": "",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "sas_device_handle": "",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "scheduler_mode": "mq-deadline",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "sectors": 0,
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "sectorsize": "2048",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "size": 493568.0,
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "support_discard": "0",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "type": "disk",
Oct 14 03:52:51 localhost thirsty_rubin[33252]: "vendor": "QEMU"
Oct 14 03:52:51 localhost thirsty_rubin[33252]: }
Oct 14 03:52:51 localhost thirsty_rubin[33252]: }
Oct 14 03:52:51 localhost thirsty_rubin[33252]: ]
Oct 14 03:52:51 localhost systemd[1]: libpod-d0e765001e1d8374125a2656538a18760abfaa679a7791c30fce687b03227a78.scope: Deactivated successfully.
Oct 14 03:52:51 localhost podman[33237]: 2025-10-14 07:52:51.453617385 +0000 UTC m=+0.992299306 container died d0e765001e1d8374125a2656538a18760abfaa679a7791c30fce687b03227a78 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_rubin, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-09-24T08:57:55, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7)
Oct 14 03:52:51 localhost ceph-osd[32282]: osd.4 16 state: booting -> active
Oct 14 03:52:51 localhost systemd[1]: tmp-crun.NwLGmA.mount: Deactivated successfully.
Oct 14 03:52:51 localhost systemd[1]: var-lib-containers-storage-overlay-18804a5def84d9a21a019aeb389ba97eeff944ef9e6ba59c85648c297e92fa5f-merged.mount: Deactivated successfully.
Oct 14 03:52:51 localhost podman[34472]: 2025-10-14 07:52:51.547965753 +0000 UTC m=+0.082152388 container remove d0e765001e1d8374125a2656538a18760abfaa679a7791c30fce687b03227a78 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_rubin, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True)
Oct 14 03:52:51 localhost systemd[1]: libpod-conmon-d0e765001e1d8374125a2656538a18760abfaa679a7791c30fce687b03227a78.scope: Deactivated successfully.
Oct 14 03:53:00 localhost systemd[26094]: Starting Mark boot as successful...
Oct 14 03:53:00 localhost systemd[26094]: Finished Mark boot as successful.
Oct 14 03:53:00 localhost podman[34595]: 2025-10-14 07:53:00.135859853 +0000 UTC m=+0.092465176 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, build-date=2025-09-24T08:57:55, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64)
Oct 14 03:53:00 localhost podman[34595]: 2025-10-14 07:53:00.267292748 +0000 UTC m=+0.223897971 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, release=553, GIT_CLEAN=True, io.buildah.version=1.33.12, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True)
Oct 14 03:54:02 localhost podman[34776]: 2025-10-14 07:54:02.119020241 +0000 UTC m=+0.084479501 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 14 03:54:02 localhost podman[34776]: 2025-10-14 07:54:02.24286818 +0000 UTC m=+0.208327400 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, release=553, GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 14 03:54:15 localhost systemd[1]: session-13.scope: Deactivated successfully.
Oct 14 03:54:15 localhost systemd[1]: session-13.scope: Consumed 22.186s CPU time.
Oct 14 03:54:15 localhost systemd-logind[760]: Session 13 logged out. Waiting for processes to exit.
Oct 14 03:54:15 localhost systemd-logind[760]: Removed session 13.
Oct 14 03:56:22 localhost systemd[26094]: Created slice User Background Tasks Slice.
Oct 14 03:56:22 localhost systemd[26094]: Starting Cleanup of User's Temporary Files and Directories...
Oct 14 03:56:22 localhost systemd[26094]: Finished Cleanup of User's Temporary Files and Directories.
Oct 14 03:57:27 localhost sshd[35153]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:57:27 localhost sshd[35154]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:57:40 localhost sshd[35155]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:57:40 localhost systemd-logind[760]: New session 27 of user zuul.
Oct 14 03:57:40 localhost systemd[1]: Started Session 27 of User zuul.
Oct 14 03:57:40 localhost python3[35203]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 14 03:57:41 localhost python3[35248]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 03:57:42 localhost python3[35268]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005486731.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 14 03:57:42 localhost python3[35324]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:57:43 localhost python3[35367]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760428662.3767502-66571-73399232771843/source _original_basename=tmpt1xltoua follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:57:43 localhost python3[35397]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:57:43 localhost python3[35413]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:57:44 localhost python3[35429]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:57:45 localhost python3[35445]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUv/ZB171sShkvmUwM4/A+38mOKHSoVqmUnoFRrcde+TmaD2jOKfnaBsMdk2YTdAdiPwM8PX7LYcOftZjXZ92Uqg/gQ0pshmFBVtIcoN0HEQlFtMQltRrBVPG+qHK5UOF2bUImKqqFx3uTPSmteSVgJtwvFqp/51YTUibYgQBWJPCcOSze95nxendWi6PoXzvorqCyVS44Llj4LmLChBJeqAI5cWs2EeDhQ4Tw8F33iKpBg8WjZAbQVbe2KIQYURMtANtjUJ0Yg5RTArSq57504iqodB4+ynahul8Dp5+TocLZTPu5orcqRGqWDe7CN5pc1eXZQuNNZ0jW59y52GY+ox+WCmp1qvB7TQzhc/r+kAVmT8VNTVUvC5TBGcIw3yxI7lzrd03zpenSL3oyJnFN4SXCeAA8YcXlz7ySaO9YAtbCSdkgj8QJCiykvalRm17F4d4aRX5+rtfEm+WG670vF6FRNNo5OTXTK2Ja84pej1bjzDBvEz81D1EqnHybfJ0= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:57:45 localhost python3[35459]: ansible-ping Invoked with data=pong
Oct 14 03:57:56 localhost sshd[35461]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:57:56 localhost systemd-logind[760]: New session 28 of user tripleo-admin.
Oct 14 03:57:56 localhost systemd[1]: Created slice User Slice of UID 1003.
Oct 14 03:57:56 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Oct 14 03:57:56 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Oct 14 03:57:56 localhost systemd[1]: Starting User Manager for UID 1003...
Oct 14 03:57:56 localhost systemd[35465]: Queued start job for default target Main User Target.
Oct 14 03:57:56 localhost systemd[35465]: Created slice User Application Slice.
Oct 14 03:57:56 localhost systemd[35465]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 14 03:57:56 localhost systemd[35465]: Started Daily Cleanup of User's Temporary Directories.
Oct 14 03:57:56 localhost systemd[35465]: Reached target Paths.
Oct 14 03:57:56 localhost systemd[35465]: Reached target Timers.
Oct 14 03:57:56 localhost systemd[35465]: Starting D-Bus User Message Bus Socket...
Oct 14 03:57:56 localhost systemd[35465]: Starting Create User's Volatile Files and Directories...
Oct 14 03:57:56 localhost systemd[35465]: Listening on D-Bus User Message Bus Socket.
Oct 14 03:57:56 localhost systemd[35465]: Reached target Sockets.
Oct 14 03:57:56 localhost systemd[35465]: Finished Create User's Volatile Files and Directories.
Oct 14 03:57:56 localhost systemd[35465]: Reached target Basic System.
Oct 14 03:57:56 localhost systemd[35465]: Reached target Main User Target.
Oct 14 03:57:56 localhost systemd[35465]: Startup finished in 127ms.
Oct 14 03:57:56 localhost systemd[1]: Started User Manager for UID 1003.
Oct 14 03:57:56 localhost systemd[1]: Started Session 28 of User tripleo-admin.
Oct 14 03:57:57 localhost python3[35527]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 03:58:02 localhost python3[35547]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Oct 14 03:58:03 localhost python3[35563]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Oct 14 03:58:03 localhost python3[35611]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.vpphmtd1tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:58:04 localhost python3[35641]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.vpphmtd1tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:58:05 localhost python3[35657]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.vpphmtd1tmphosts insertbefore=BOF block=172.17.0.106 np0005486731.localdomain np0005486731#012172.18.0.106 np0005486731.storage.localdomain np0005486731.storage#012172.20.0.106 np0005486731.storagemgmt.localdomain np0005486731.storagemgmt#012172.17.0.106 np0005486731.internalapi.localdomain np0005486731.internalapi#012172.19.0.106 np0005486731.tenant.localdomain np0005486731.tenant#012192.168.122.106 np0005486731.ctlplane.localdomain np0005486731.ctlplane#012172.17.0.107 np0005486732.localdomain np0005486732#012172.18.0.107 np0005486732.storage.localdomain np0005486732.storage#012172.20.0.107 np0005486732.storagemgmt.localdomain np0005486732.storagemgmt#012172.17.0.107 np0005486732.internalapi.localdomain np0005486732.internalapi#012172.19.0.107 np0005486732.tenant.localdomain np0005486732.tenant#012192.168.122.107 np0005486732.ctlplane.localdomain np0005486732.ctlplane#012172.17.0.108 np0005486733.localdomain np0005486733#012172.18.0.108 np0005486733.storage.localdomain np0005486733.storage#012172.20.0.108 np0005486733.storagemgmt.localdomain np0005486733.storagemgmt#012172.17.0.108 np0005486733.internalapi.localdomain np0005486733.internalapi#012172.19.0.108 np0005486733.tenant.localdomain np0005486733.tenant#012192.168.122.108 np0005486733.ctlplane.localdomain np0005486733.ctlplane#012172.17.0.103 np0005486728.localdomain np0005486728#012172.18.0.103 np0005486728.storage.localdomain np0005486728.storage#012172.20.0.103 np0005486728.storagemgmt.localdomain np0005486728.storagemgmt#012172.17.0.103 np0005486728.internalapi.localdomain np0005486728.internalapi#012172.19.0.103 np0005486728.tenant.localdomain np0005486728.tenant#012192.168.122.103 np0005486728.ctlplane.localdomain np0005486728.ctlplane#012172.17.0.104 np0005486729.localdomain np0005486729#012172.18.0.104 np0005486729.storage.localdomain np0005486729.storage#012172.20.0.104 np0005486729.storagemgmt.localdomain np0005486729.storagemgmt#012172.17.0.104 np0005486729.internalapi.localdomain np0005486729.internalapi#012172.19.0.104 np0005486729.tenant.localdomain np0005486729.tenant#012192.168.122.104 np0005486729.ctlplane.localdomain np0005486729.ctlplane#012172.17.0.105 np0005486730.localdomain np0005486730#012172.18.0.105 np0005486730.storage.localdomain np0005486730.storage#012172.20.0.105 np0005486730.storagemgmt.localdomain np0005486730.storagemgmt#012172.17.0.105 np0005486730.internalapi.localdomain np0005486730.internalapi#012172.19.0.105 np0005486730.tenant.localdomain np0005486730.tenant#012192.168.122.105 np0005486730.ctlplane.localdomain np0005486730.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.210 overcloud.storage.localdomain#012172.20.0.247 overcloud.storagemgmt.localdomain#012172.17.0.162 overcloud.internalapi.localdomain#012172.21.0.142 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:58:05 localhost python3[35673]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.vpphmtd1tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:58:06 localhost python3[35690]: ansible-file Invoked with path=/tmp/ansible.vpphmtd1tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:58:06 localhost python3[35706]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:58:07 localhost python3[35723]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 03:58:11 localhost sshd[35805]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 03:58:12 localhost python3[35822]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:58:12 localhost python3[35839]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 03:59:21 localhost kernel: SELinux: Converting 2700 SID table entries...
Oct 14 03:59:21 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 14 03:59:21 localhost kernel: SELinux: policy capability open_perms=1
Oct 14 03:59:21 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 14 03:59:21 localhost kernel: SELinux: policy capability always_check_network=0
Oct 14 03:59:21 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 14 03:59:21 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 14 03:59:21 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 14 03:59:22 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=6 res=1
Oct 14 03:59:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 03:59:22 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 14 03:59:22 localhost systemd[1]: Reloading.
Oct 14 03:59:22 localhost systemd-rc-local-generator[37108]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:59:22 localhost systemd-sysv-generator[37111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:59:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:59:22 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 03:59:23 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 03:59:23 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 14 03:59:23 localhost systemd[1]: run-ra5eeb59c92bd42998d3beb0576466f3c.service: Deactivated successfully.
Oct 14 03:59:24 localhost python3[37566]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:59:25 localhost python3[37705]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 03:59:25 localhost systemd[1]: Reloading.
Oct 14 03:59:25 localhost systemd-sysv-generator[37738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 03:59:25 localhost systemd-rc-local-generator[37733]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:59:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 03:59:27 localhost python3[37760]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:59:27 localhost python3[37776]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:59:28 localhost python3[37793]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 14 03:59:30 localhost python3[37811]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:59:30 localhost python3[37829]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:59:31 localhost python3[37847]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 03:59:31 localhost systemd[1]: Reloading Network Manager...
Oct 14 03:59:31 localhost NetworkManager[5972]: [1760428771.3240] audit: op="reload" arg="0" pid=37850 uid=0 result="success" Oct 14 03:59:31 localhost NetworkManager[5972]: [1760428771.3248] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf)) Oct 14 03:59:31 localhost NetworkManager[5972]: [1760428771.3249] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged Oct 14 03:59:31 localhost systemd[1]: Reloaded Network Manager. Oct 14 03:59:31 localhost python3[37866]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:59:32 localhost python3[37883]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 03:59:32 localhost python3[37901]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 03:59:33 localhost python3[37917]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 03:59:33 localhost python3[37933]: ansible-tempfile Invoked with state=file prefix=ansible. 
suffix= path=None Oct 14 03:59:34 localhost python3[37949]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 03:59:35 localhost python3[37965]: ansible-blockinfile Invoked with path=/tmp/ansible.073al_4t block=[192.168.122.106]*,[np0005486731.ctlplane.localdomain]*,[172.17.0.106]*,[np0005486731.internalapi.localdomain]*,[172.18.0.106]*,[np0005486731.storage.localdomain]*,[172.20.0.106]*,[np0005486731.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005486731.tenant.localdomain]*,[np0005486731.localdomain]*,[np0005486731]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCirnE0NbUtG1POhhB+AhKCgxEghhJb/WUMq5UfTpoI7+sU48jNxRyEvlJ9WLGLD82QYzFzvYceQHGF3QzqwIybk7JFKNvYYEOkz9hG//Xjh6A/3qZ0QptW0dWlBpSs0CuOATe19vBa98AfD1qNMYOAwwjlRDvjVW17VALcKjVesDK4LNkVfCSX9cK7Gdd1LfEkwQwxiTTZeSd91DSx5XIm3hz9RcMpxpCgc3snA81FXTTb4G1v39rycXuWjjlp/2B4CRlgPrIb6u1X/hkN0uxSMiwMQG7fZladvZi8RTRyt2EmTR0l8f0eDeuN1gLfOFVlQSfj33xH8/2G2s4IUhbudf732i4GKxgy5WBMiH2DVHzoO7LGdKlYKRvxgNG8qx68hOAzHokMnmaHnKlTsXNPph6MD/ufoeHaEG35xMkewSoY70MzDny/Z9lllfTTs+Yi5YEO22s5EoS6KK9C1+WShW9TELIuj5X8P1VeD+LlKJIwbLQzEHLc1irbnJ2RgUc=#012[192.168.122.107]*,[np0005486732.ctlplane.localdomain]*,[172.17.0.107]*,[np0005486732.internalapi.localdomain]*,[172.18.0.107]*,[np0005486732.storage.localdomain]*,[172.20.0.107]*,[np0005486732.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005486732.tenant.localdomain]*,[np0005486732.localdomain]*,[np0005486732]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDM+kpIg8Y4xlC9n9pfBoVDeeU3WOfZT4Yf4ib8bb9MSMyOwJpLVbkpe3nLg73heYlLISwD3ojybTo9jDmNS7Pq+q5bGue4oqLk7f5B7IMwrmkfzjKYQpGMLL7FdErlDs6IP2jQ82E+uJ7M54Kv5g0rr+blVacsnYetzjJM26r3UcKTdOjJyIHuvQWa4IzNJRydr8s9//7Orf7269xlmVoqyAkcrhzcewCVeaK7VOrIcy3oKzOtwYpQmSxUumuX5rxE8KoCn4Ag0V3Mpp7hqN2xrry1hJN1J7yXSYaF1pc4MJKvCK6k0VqK4dY6CppsQvx2HW1s/Ib5UxJ/+JypjsqwYcSL7BSesfCtHtY8Tn1bbI+nm+nbMw1VIECq94FvZldDnxbaCQDP7dkFxqJaZebSFX+XAsRqJq4M8/rAm2gFUtCisiggasuEgfBfODBwb5+EYGNBCS/72Xs3b1h+hoMh0XCocdkTpzbr40FK6djLBdZXBAt7/Vwy0fTpC9G8H+s=#012[192.168.122.108]*,[np0005486733.ctlplane.localdomain]*,[172.17.0.108]*,[np0005486733.internalapi.localdomain]*,[172.18.0.108]*,[np0005486733.storage.localdomain]*,[172.20.0.108]*,[np0005486733.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005486733.tenant.localdomain]*,[np0005486733.localdomain]*,[np0005486733]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPo0GfacWT5Pc+C+u+omIcLodqLCmBuNDNfCjeb037QgP4jmD3LwkBVK9lXeF6bKJmM0PzOPagPFh4T7FwHNF7Np+V7e+YWSARFeetHnxYmMZdWYyfKTaZrS25xRraxyGrunWniIhAKFUaTz7e6OjUqNe25eVURCgpvQnsWeDwm/Gk9GfpfMCIFRtF7phpUKzSaz/8IpyLG1IzRSMsUkEtoKFxbAkuuJrkD4IWeWvEqn02yWC2WFGEdpQu8kcnxIshwqf9bEa7rYrjDTR++5AuztTSbppQL+8RIclxDR3uCVxzprf9Pj2C0e2X7TVKUs1tlduvrPK7uS10NGx3CK5iUe+uX+4V+jNrpe35OBv2vzdbzR+W6ciNtdy2lWLTou66Fm+/a3XwfJQb66dWQrLIyc6T64D8BysHjA8ER5TZ7N8AZoFZ8tNRzPgNWFZhjzoXdYisTvN9CjcpLgVpzekjeQS4BNNzh7bs+FPdB49TSf65NLzBIhWNqHT8weDoO58=#012[192.168.122.103]*,[np0005486728.ctlplane.localdomain]*,[172.17.0.103]*,[np0005486728.internalapi.localdomain]*,[172.18.0.103]*,[np0005486728.storage.localdomain]*,[172.20.0.103]*,[np0005486728.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005486728.tenant.localdomain]*,[np0005486728.localdomain]*,[np0005486728]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDr2nlXCVxp/8oDgdtx78rfaKpbpZ2BVPZ6HGLZUj0EA3A0bpv/vCkjK3KQT3TI7v1XfpgRbj08G0BbDhcTce9c8drn6X7lMpxvdMYZKKMTHnRs3mq9RsfEuWH3Q8Aa22LiA7rLwzVM2bbdbUcx/55pt3si8ariZ274Pzbprq7RrthEdE9xo5SDFIi+VJNQfQa+igaLblAAoG8qz+WChOAEmghfOAe4F7vBmidVxT92aYUE03zpWtqox4fE1U2dC0FMJ6Jro1ONj8KKCyEL+oLEbWFbPR4ynCyRvGaMIYh+9scB5yCf7vgPXNqu8sG+gR9i5wG43Nnh+76+XX/k+4Vyw/VeNANTjdiGvBcWmj1LLMDetoxZ5AdfklGaQq5qmrIvGqvIAGd7NgdwwWWw2umuIru3mi/5Z0H5I1uhLgTdknibTJSkhkkt/sBiBuyAXM3/HneFzlxDlYgA1xwdZeNnfiH010AO2W8pkWmWsYdMOEOBsM3SmGWtUuGKApwHcs8=#012[192.168.122.104]*,[np0005486729.ctlplane.localdomain]*,[172.17.0.104]*,[np0005486729.internalapi.localdomain]*,[172.18.0.104]*,[np0005486729.storage.localdomain]*,[172.20.0.104]*,[np0005486729.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005486729.tenant.localdomain]*,[np0005486729.localdomain]*,[np0005486729]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuTpRqp6mqKsQmynNLG8q8Bb4GSKNLRdYVfi81dV1W3aIPFsswo/C9+5nbZA1YVPY02cdXFps4EmIQl2tQ0sKmdo4HGexnhUJjKuyXFTu0kCYUasXCE5+sSjRVUCF4RfD3+6jQ9w6hHM1R3JkkhPZtKs4ykqH+8Gr2B918BdDuVaujfMmVWMv8M46JDuDO9vGPlWpM+xZkFZ1zjG2I2UIvWLkEnVdta7QIgxIPTlX7rOokadGrkAcIYb87wONg2vJiTPWO4ht4yHUIvTGNHSTmCXK0sdQLiZzjR2P/k67s1KMeWjaWAe3NXygnpvgENx9Qf9NkOYhvz8j+xZXat4Pa/I38V79XAjE3nWEF/KM6a4nKK9Lz5GXOvsQ+LIXBBY6HSAqBY4Lc21xwCJxEoO5Iftn56HzDFA+iyex5FMeT12ANKmVF9D+NHdaiZ3d5iPW6cOPqph1UjWsofejhEt0dxmCbippl74SWTZey9dQ3TKM9BGf2QfH1GvasiC+CsVU=#012[192.168.122.105]*,[np0005486730.ctlplane.localdomain]*,[172.17.0.105]*,[np0005486730.internalapi.localdomain]*,[172.18.0.105]*,[np0005486730.storage.localdomain]*,[172.20.0.105]*,[np0005486730.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005486730.tenant.localdomain]*,[np0005486730.localdomain]*,[np0005486730]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDtk5xAqdm3oDp772fF0Tcpwt7lZCIcJjfcDVjKALPT5gaSA/ogGG08ba03OQjSa4fktVIIeYQdRVzWIscOCoWMDa+vnXRStoi9DI+3rLz3nQvH190s8hPq6KxWR8DzGiqF8GwF1Kfuc7wz4c9jdElv6iWUfZuxCSLQfPSRYOw9IIII6knfTuRjQAIdmUJwnjN9K5n2n8rISg0VPd9kUHZR8jL+zFPsv5XkwfW/t5CEMmx6WG8w8Q6gY+yoeU4qINcRzFjKx/s6ParctRSYzJDPYEyhrgqQUesBDU4nyxRDpFilkeZI46TfqC9bG5bKTVfVy6qnAgkt4vg6buwszUTRdx6a0v68zWAwKGNAHRKS/HQ/CRe7CHYqsob7w41V4RvOtP5kz+dniINeT/K71sL3ZwcciRuGM10ayjaxBw7HOMJHi9RWrPWads3ubzTErcORb9mdWdlSomqfEGB8Ig/tKeFTipyN39TKKHLD+o6Tjnxqb3imMsE1kZWQOzHbFhE=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 03:59:35 localhost python3[37981]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.073al_4t' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:59:36 localhost python3[37999]: ansible-file Invoked with path=/tmp/ansible.073al_4t state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 03:59:36 localhost python3[38015]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 03:59:37 localhost 
python3[38031]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:59:37 localhost python3[38049]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:59:37 localhost python3[38068]: ansible-community.general.cloud_init_data_facts Invoked with filter=status Oct 14 03:59:40 localhost python3[38205]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:59:41 localhost python3[38222]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 14 03:59:43 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 14 03:59:44 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 14 03:59:44 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 14 03:59:44 localhost systemd[1]: Starting man-db-cache-update.service... 
Oct 14 03:59:44 localhost systemd[1]: Reloading. Oct 14 03:59:44 localhost systemd-rc-local-generator[38300]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 03:59:44 localhost systemd-sysv-generator[38303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 03:59:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 03:59:44 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 14 03:59:44 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Oct 14 03:59:44 localhost systemd[1]: tuned.service: Deactivated successfully. Oct 14 03:59:44 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Oct 14 03:59:44 localhost systemd[1]: tuned.service: Consumed 1.960s CPU time. Oct 14 03:59:44 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Oct 14 03:59:44 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 14 03:59:44 localhost systemd[1]: Finished man-db-cache-update.service. Oct 14 03:59:44 localhost systemd[1]: run-rfbc19ba09448415ab95bb22b1dac1e1f.service: Deactivated successfully. Oct 14 03:59:46 localhost systemd[1]: Started Dynamic System Tuning Daemon. Oct 14 03:59:46 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 14 03:59:46 localhost systemd[1]: Starting man-db-cache-update.service... Oct 14 03:59:46 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 14 03:59:46 localhost systemd[1]: Finished man-db-cache-update.service. Oct 14 03:59:46 localhost systemd[1]: run-r5665057696a344c3b9b5c703ac1269bc.service: Deactivated successfully. 
Oct 14 03:59:47 localhost python3[38658]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 03:59:47 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Oct 14 03:59:47 localhost systemd[1]: tuned.service: Deactivated successfully. Oct 14 03:59:47 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Oct 14 03:59:47 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Oct 14 03:59:49 localhost systemd[1]: Started Dynamic System Tuning Daemon. Oct 14 03:59:49 localhost python3[38853]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:59:50 localhost python3[38870]: ansible-slurp Invoked with src=/etc/tuned/active_profile Oct 14 03:59:50 localhost python3[38886]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 03:59:51 localhost python3[38902]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:59:52 localhost python3[38922]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 03:59:53 localhost python3[38939]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 03:59:56 localhost python3[38955]: ansible-replace 
Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:01 localhost python3[38971]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:02 localhost python3[39019]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:02 localhost python3[39064]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428801.8125107-71105-209655571632190/source _original_basename=tmpqk18j1g4 follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:02 localhost python3[39094]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:03 localhost python3[39142]: ansible-ansible.legacy.stat 
Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:04 localhost python3[39185]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428803.3979137-71240-56863448283336/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=3ad8d0209f3c580b846ebda0d1ccff7b6a77b702 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:04 localhost python3[39247]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:04 localhost python3[39290]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428804.2936313-71296-133350344347907/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=cd6b22568046e0a42e6fe7d93359257b42ca6ee5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:05 localhost python3[39352]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:05 localhost python3[39395]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428805.1294188-71296-22727357017662/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False 
_original_basename=vip_data.j2 checksum=988045890eab4c878cbeebf6fe69706ab2c2cfec backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:06 localhost python3[39457]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:06 localhost python3[39500]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428806.0678344-71296-200034700024343/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=68b5a56a66cb10764ef3288009ad5e9b7e8faf12 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:07 localhost python3[39562]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:07 localhost python3[39605]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428807.1356215-71296-232731200201390/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:08 localhost python3[39667]: ansible-ansible.legacy.stat Invoked with 
path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:08 localhost python3[39710]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428807.9926069-71296-1064495043646/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=ccf42041da870d981650c02999e8ffc679c1e6ea backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:09 localhost python3[39772]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:09 localhost python3[39815]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428808.8463657-71296-61287750189514/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:10 localhost python3[39877]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:10 localhost python3[39920]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428809.7257411-71296-226483662957426/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False 
_original_basename=service_configs.j2 checksum=8391a3a377145b325f1f0c494e2f35795c60fdac backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:10 localhost python3[39982]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:11 localhost python3[40025]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428810.6065397-71296-67387924724470/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:11 localhost python3[40124]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:12 localhost python3[40192]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428811.4561718-71296-166998140052776/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:12 localhost python3[40267]: ansible-ansible.legacy.stat Invoked with 
path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:12 localhost python3[40312]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428812.2845135-71296-49248134349602/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=cf05eafba8ad9786cfdcc1ed23cad176222b2916 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:13 localhost python3[40342]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:00:14 localhost python3[40390]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:00:14 localhost python3[40433]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428814.2182908-72074-86769441862443/source _original_basename=tmpkq5o45zn follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:00:19 localhost python3[40463]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 04:00:20 localhost python3[40524]: 
ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:00:22 localhost systemd[35465]: Starting Mark boot as successful... Oct 14 04:00:22 localhost systemd[35465]: Finished Mark boot as successful. Oct 14 04:00:24 localhost python3[40542]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:00:29 localhost python3[40559]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:00:30 localhost python3[40582]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:00:30 localhost python3[40605]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:00:31 localhost python3[40628]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) 
.*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:00:32 localhost python3[40651]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:01:14 localhost python3[40746]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:01:14 localhost python3[40794]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:01:15 localhost python3[40812]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmpyhrsk2g9 recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:01:15 localhost python3[40842]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:16 localhost python3[40905]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:16 localhost python3[40923]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:17 localhost python3[40985]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:17 localhost python3[41003]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:18 localhost python3[41065]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:18 localhost python3[41083]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:18 localhost python3[41145]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:19 localhost python3[41163]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:19 localhost python3[41225]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:19 localhost python3[41243]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:20 localhost python3[41305]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:20 localhost python3[41323]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:21 localhost python3[41385]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:21 localhost python3[41403]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:21 localhost python3[41465]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:22 localhost python3[41483]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:22 localhost python3[41545]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:22 localhost python3[41563]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:23 localhost python3[41625]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:23 localhost python3[41643]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:24 localhost python3[41705]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:24 localhost python3[41723]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:25 localhost python3[41753]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:01:25 localhost python3[41801]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:26 localhost python3[41819]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmps60d7jex recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:28 localhost python3[41849]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:01:33 localhost python3[41866]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:01:33 localhost python3[41884]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:01:34 localhost python3[41902]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:01:35 localhost systemd[1]: Reloading.
Oct 14 04:01:35 localhost systemd-rc-local-generator[41931]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:01:35 localhost systemd-sysv-generator[41934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:01:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:01:35 localhost systemd[1]: Starting Netfilter Tables...
Oct 14 04:01:35 localhost systemd[1]: Finished Netfilter Tables.
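The dnf and systemd module invocations above switch the node's firewall from firewalld/iptables to nftables. Run by hand, the same state changes would look roughly like the sketch below; these are illustrative equivalents of the logged module parameters, not the literal commands the Ansible modules execute.

```
# Remove firewalld (ansible-dnf: name=['firewalld'] state=absent)
dnf -y remove firewalld
# Stop and disable the legacy services (ansible.builtin.systemd: state=stopped enabled=False)
systemctl stop iptables.service ip6tables.service
systemctl disable iptables.service ip6tables.service
# Start and enable nftables (ansible.builtin.systemd: state=started enabled=True)
systemctl enable --now nftables
```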
Oct 14 04:01:36 localhost python3[41992]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:36 localhost python3[42035]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428895.9778442-74922-187090379766883/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:37 localhost python3[42065]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:01:37 localhost python3[42083]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:01:38 localhost python3[42132]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:38 localhost python3[42175]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428897.776084-75036-231668308674816/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:39 localhost python3[42237]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:39 localhost python3[42280]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428898.7481866-75101-191971197818621/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:40 localhost python3[42342]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:40 localhost python3[42385]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428899.7908115-75168-76324775465961/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:41 localhost python3[42447]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:41 localhost python3[42490]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428900.6631215-75214-185013781237785/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:42 localhost python3[42552]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:42 localhost python3[42595]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428901.6549604-75263-120361063337161/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:43 localhost python3[42625]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:01:43 localhost python3[42691]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:44 localhost python3[42708]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:01:44 localhost python3[42725]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:01:45 localhost python3[42744]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:01:45 localhost python3[42760]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:01:46 localhost python3[42776]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:01:46 localhost python3[42792]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 14 04:01:47 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=7 res=1
Oct 14 04:01:47 localhost python3[42812]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 14 04:01:48 localhost kernel: SELinux: Converting 2704 SID table entries...
Oct 14 04:01:48 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 14 04:01:48 localhost kernel: SELinux: policy capability open_perms=1
Oct 14 04:01:48 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 14 04:01:48 localhost kernel: SELinux: policy capability always_check_network=0
Oct 14 04:01:48 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 14 04:01:48 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 14 04:01:48 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 14 04:01:48 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=8 res=1
Oct 14 04:01:48 localhost python3[42833]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 14 04:01:49 localhost kernel: SELinux: Converting 2704 SID table entries...
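The seboolean and sefcontext invocations in this window adjust SELinux policy so containers can use the iSCSI/target paths; each reload triggers the kernel's "Converting ... SID table entries" messages seen around them. As a hedged sketch, the equivalent manual commands (community.general.sefcontext is a wrapper around semanage fcontext) would be roughly:

```
# Persistently enable the boolean (ansible-seboolean: persistent=True state=True)
setsebool -P virt_sandbox_use_netlink on
# Add the file-context mappings logged above and later in this run
semanage fcontext -a -t container_file_t '/etc/iscsi(/.*)?'
semanage fcontext -a -t container_file_t '/etc/target(/.*)?'
semanage fcontext -a -t container_file_t '/var/lib/iscsi(/.*)?'
# Relabel the affected trees so existing files pick up the new contexts
restorecon -Rv /etc/iscsi /etc/target /var/lib/iscsi
```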
Oct 14 04:01:49 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 14 04:01:49 localhost kernel: SELinux: policy capability open_perms=1
Oct 14 04:01:49 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 14 04:01:49 localhost kernel: SELinux: policy capability always_check_network=0
Oct 14 04:01:49 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 14 04:01:49 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 14 04:01:49 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 14 04:01:49 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=9 res=1
Oct 14 04:01:50 localhost python3[42854]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 14 04:01:50 localhost kernel: SELinux: Converting 2704 SID table entries...
Oct 14 04:01:50 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 14 04:01:50 localhost kernel: SELinux: policy capability open_perms=1
Oct 14 04:01:50 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 14 04:01:50 localhost kernel: SELinux: policy capability always_check_network=0
Oct 14 04:01:50 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 14 04:01:50 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 14 04:01:50 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 14 04:01:51 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=10 res=1
Oct 14 04:01:51 localhost python3[42876]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:01:51 localhost python3[42892]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:01:52 localhost python3[42908]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:01:52 localhost python3[42924]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:01:52 localhost python3[42940]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:01:53 localhost python3[42957]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:01:57 localhost python3[42974]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:01:57 localhost python3[43022]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:58 localhost python3[43065]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428917.3926067-76134-191291552298322/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:01:58 localhost python3[43095]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:01:58 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 14 04:01:58 localhost systemd[1]: Stopped Load Kernel Modules.
Oct 14 04:01:58 localhost systemd[1]: Stopping Load Kernel Modules...
Oct 14 04:01:58 localhost systemd[1]: Starting Load Kernel Modules...
Oct 14 04:01:58 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
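The copy task above writes /etc/modules-load.d/99-tripleo.conf and restarts systemd-modules-load.service so the listed modules load immediately. The file's contents are not logged (only its checksum), but the loader's own messages in this run name br_netfilter (inserted) and msr (built in), so a minimal reconstruction, which may omit modules the real template includes, would be:

```
# /etc/modules-load.d/99-tripleo.conf -- reconstructed from the
# systemd-modules-load messages in this log; the shipped file may list more.
br_netfilter
msr
```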
Oct 14 04:01:58 localhost systemd-modules-load[43098]: Inserted module 'br_netfilter'
Oct 14 04:01:58 localhost kernel: Bridge firewalling registered
Oct 14 04:01:58 localhost systemd-modules-load[43098]: Module 'msr' is built in
Oct 14 04:01:58 localhost systemd[1]: Finished Load Kernel Modules.
Oct 14 04:01:59 localhost python3[43149]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:01:59 localhost python3[43192]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428918.9796963-76248-266591570672408/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:00 localhost python3[43222]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:00 localhost python3[43239]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:00 localhost python3[43257]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:01 localhost python3[43275]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:01 localhost python3[43294]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:01 localhost python3[43311]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:02 localhost python3[43328]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:02 localhost python3[43346]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:02 localhost python3[43364]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:03 localhost python3[43382]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:03 localhost python3[43400]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:03 localhost python3[43418]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:04 localhost python3[43436]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:04 localhost python3[43454]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:04 localhost python3[43471]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:04 localhost python3[43488]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:05 localhost python3[43505]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:05 localhost python3[43522]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 14 04:02:06 localhost python3[43540]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:02:06 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 14 04:02:06 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 14 04:02:06 localhost systemd[1]: Stopping Apply Kernel Variables...
Oct 14 04:02:06 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 14 04:02:06 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 14 04:02:06 localhost systemd[1]: Finished Apply Kernel Variables.
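Among the sysctl values written above, net.ipv4.ip_local_reserved_ports=35357,49000-49001 uses the kernel's comma-separated list-and-range syntax for reserving ephemeral ports. A small Python helper (illustrative only, not part of the playbook) shows which ports that string actually reserves:

```python
def expand_reserved_ports(spec: str) -> set[int]:
    """Expand a net.ipv4.ip_local_reserved_ports value: comma-separated
    single ports ("p") and inclusive ranges ("lo-hi")."""
    ports: set[int] = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-", 1)
            ports.update(range(int(lo), int(hi) + 1))
        else:
            ports.add(int(part))
    return ports

# The value logged by ansible-sysctl above:
print(sorted(expand_reserved_ports("35357,49000-49001")))  # [35357, 49000, 49001]
```

Here 35357 (keystone's legacy admin port) and the 49000-49001 range are kept out of the kernel's local ephemeral-port allocation so services can bind them reliably.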
Oct 14 04:02:06 localhost python3[43560]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:06 localhost python3[43576]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:07 localhost python3[43592]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:07 localhost python3[43608]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:02:07 localhost python3[43624]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:08 localhost python3[43640]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:08 localhost python3[43656]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:08 localhost python3[43672]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:09 localhost python3[43688]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:09 localhost python3[43736]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:02:10 localhost python3[43779]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428929.4032536-76642-66926714042805/source _original_basename=tmpfxs6jc2e follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:10 localhost python3[43809]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:02:12 localhost python3[43826]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:12 localhost python3[43874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:02:13 localhost python3[43917]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428932.4243367-76815-108377187579180/source _original_basename=tmp747m6glq follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:13 localhost python3[43947]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:13 localhost python3[43963]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:14 localhost python3[43979]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:14 localhost python3[43995]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:14 localhost python3[44011]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:15 localhost python3[44027]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:15 localhost python3[44043]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:15 localhost python3[44059]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:16 localhost python3[44105]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:16 localhost python3[44141]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Oct 14 04:02:16 localhost python3[44193]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005486731.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 14 04:02:17 localhost python3[44251]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Oct 14 04:02:17 localhost python3[44267]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:02:18 localhost python3[44331]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:02:18 localhost python3[44374]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428937.8003364-77116-46143297535437/source _original_basename=tmpnv2tpwa9 follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:18 localhost python3[44404]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 14 04:02:19 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=11 res=1
Oct 14 04:02:19 localhost python3[44427]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:20 localhost python3[44443]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Oct 14 04:02:21 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=12 res=1
Oct 14 04:02:21 localhost python3[44520]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:02:24 localhost python3[44537]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:02:25 localhost python3[44598]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:25 localhost python3[44614]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:02:26 localhost python3[44673]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:02:26 localhost python3[44716]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428946.0806212-77533-204686802960289/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=dbf883d0654188162e5c0adddc607275ec25d670 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:27 localhost python3[44778]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:02:27 localhost python3[44823]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428947.0608656-77578-126345580600051/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:28 localhost python3[44853]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:28 localhost python3[44869]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:28 localhost python3[44885]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:29 localhost python3[44901]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:29 localhost python3[44949]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:02:30 localhost python3[44992]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428949.6387494-77735-41897283873524/source _original_basename=tmpk15ljeao follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:30 localhost python3[45022]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:31 localhost python3[45038]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:02:31 localhost python3[45054]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:02:35 localhost python3[45103]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:02:35 localhost python3[45148]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428955.0741737-78032-245029138044139/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:36 localhost python3[45179]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:02:36 localhost systemd[1]: Stopping OpenSSH server daemon...
Oct 14 04:02:36 localhost systemd[1]: sshd.service: Deactivated successfully.
Oct 14 04:02:36 localhost systemd[1]: Stopped OpenSSH server daemon.
Oct 14 04:02:36 localhost systemd[1]: sshd.service: Consumed 2.131s CPU time, read 1.9M from disk, written 16.0K to disk.
Oct 14 04:02:36 localhost systemd[1]: Stopped target sshd-keygen.target.
Oct 14 04:02:36 localhost systemd[1]: Stopping sshd-keygen.target...
Oct 14 04:02:36 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 04:02:36 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 04:02:36 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 04:02:36 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 14 04:02:36 localhost systemd[1]: Starting OpenSSH server daemon...
Oct 14 04:02:36 localhost sshd[45183]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 04:02:36 localhost systemd[1]: Started OpenSSH server daemon.
Oct 14 04:02:36 localhost python3[45199]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:02:37 localhost python3[45217]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:02:38 localhost python3[45235]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:02:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:02:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3400
writes, 16K keys, 3400 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3400 writes, 204 syncs, 16.67 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3400 writes, 16K keys, 3400 commit groups, 1.0 writes per commit group, ingest: 15.30 MB, 0.03 MB/s#012Interval WAL: 3400 writes, 204 syncs, 16.67 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 
0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55644d8b0850#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55644d8b0850#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Oct 14 04:02:41 localhost python3[45284]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:02:42 localhost python3[45302]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:42 localhost python3[45332]:
ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:02:43 localhost python3[45382]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:02:44 localhost python3[45400]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:02:44 localhost python3[45430]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:02:44 localhost systemd[1]: Reloading.
Oct 14 04:02:44 localhost systemd-sysv-generator[45459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:02:44 localhost systemd-rc-local-generator[45455]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:02:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:02:44 localhost systemd[1]: Starting chronyd online sources service...
Oct 14 04:02:44 localhost chronyc[45470]: 200 OK
Oct 14 04:02:44 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Oct 14 04:02:44 localhost systemd[1]: Finished chronyd online sources service.
Oct 14 04:02:45 localhost python3[45486]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:02:45 localhost chronyd[25893]: System clock was stepped by -0.000098 seconds
Oct 14 04:02:45 localhost python3[45503]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:02:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:02:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 14.61 MB, 0.02 MB/s#012Interval WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB)
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557c1d2f22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 
0.000102 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for 
pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557c1d2f22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000102 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 
0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me Oct 14 04:02:46 localhost python3[45520]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:02:46 localhost chronyd[25893]: System clock was stepped by 0.000000 seconds Oct 14 04:02:46 localhost python3[45537]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:02:47 localhost python3[45554]: ansible-timezone Invoked with name=UTC hwclock=None Oct 14 04:02:47 localhost systemd[1]: Starting Time & Date Service... Oct 14 04:02:47 localhost systemd[1]: Started Time & Date Service. 
Oct 14 04:02:48 localhost python3[45574]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:02:48 localhost python3[45591]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:02:49 localhost python3[45608]: ansible-slurp Invoked with src=/etc/tuned/active_profile Oct 14 04:02:49 localhost python3[45624]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:02:50 localhost python3[45640]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 04:02:50 localhost python3[45656]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 04:02:51 localhost python3[45704]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:02:51 
localhost python3[45747]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428970.7757661-79009-125226837079191/source _original_basename=tmp3kxto_57 follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:02:52 localhost python3[45809]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:02:52 localhost python3[45852]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428971.7193987-79057-119595279505770/source _original_basename=tmpknqrwbf8 follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:02:52 localhost python3[45882]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 14 04:02:53 localhost systemd[1]: Reloading. Oct 14 04:02:53 localhost systemd-sysv-generator[45910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 04:02:53 localhost systemd-rc-local-generator[45906]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:02:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:02:53 localhost python3[45935]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 04:02:54 localhost python3[45951]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:02:54 localhost python3[45968]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:02:54 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. 
Oct 14 04:02:54 localhost python3[45985]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 04:02:55 localhost python3[46001]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:02:55 localhost python3[46049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:02:56 localhost python3[46092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760428975.45901-79256-193879274964434/source _original_basename=tmpya5730b9 follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:03:17 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Oct 14 04:03:22 localhost systemd[35465]: Created slice User Background Tasks Slice. Oct 14 04:03:22 localhost systemd[35465]: Starting Cleanup of User's Temporary Files and Directories... 
Oct 14 04:03:22 localhost systemd[35465]: Finished Cleanup of User's Temporary Files and Directories. Oct 14 04:03:22 localhost python3[46202]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:03:23 localhost python3[46218]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Oct 14 04:03:23 localhost python3[46234]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:03:23 localhost python3[46250]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:03:24 localhost python3[46266]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:03:24 localhost python3[46282]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Oct 14 04:03:25 localhost kernel: SELinux: Converting 2707 SID table entries... Oct 14 04:03:25 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 14 04:03:25 localhost kernel: SELinux: policy capability open_perms=1 Oct 14 04:03:25 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 14 04:03:25 localhost kernel: SELinux: policy capability always_check_network=0 Oct 14 04:03:25 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 14 04:03:25 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 14 04:03:25 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 14 04:03:25 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=13 res=1 Oct 14 04:03:26 localhost python3[46304]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 04:03:28 localhost python3[46441]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': 
{'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Oct 14 
04:03:28 localhost rsyslogd[759]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Oct 14 04:03:28 localhost python3[46457]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:03:29 localhost python3[46473]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:03:29 localhost python3[46489]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 
'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 
'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': 
'/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}}
Oct 14 04:03:35 localhost python3[46537]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:03:35 localhost python3[46580]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429014.9083347-80928-37808813016976/source _original_basename=tmpwpjyaexc follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:03:36 localhost python3[46610]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:03:38 localhost python3[46733]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 14 04:03:39 localhost python3[46854]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 14 04:03:42 localhost python3[46870]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:03:43 localhost python3[46887]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:03:46 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 14 04:03:46 localhost dbus-broker-launch[18331]: Noticed file-system modification, trigger reload.
Oct 14 04:03:46 localhost dbus-broker-launch[18331]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 14 04:03:46 localhost dbus-broker-launch[18331]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 14 04:03:46 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 14 04:03:47 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 14 04:03:47 localhost systemd[1]: Reexecuting.
Oct 14 04:03:47 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 14 04:03:47 localhost systemd[1]: Detected virtualization kvm.
Oct 14 04:03:47 localhost systemd[1]: Detected architecture x86-64.
Oct 14 04:03:47 localhost systemd-sysv-generator[46943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:03:47 localhost systemd-rc-local-generator[46939]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:03:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:03:55 localhost kernel: SELinux: Converting 2707 SID table entries...
Oct 14 04:03:55 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 14 04:03:55 localhost kernel: SELinux: policy capability open_perms=1
Oct 14 04:03:55 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 14 04:03:55 localhost kernel: SELinux: policy capability always_check_network=0
Oct 14 04:03:55 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 14 04:03:55 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 14 04:03:55 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 14 04:03:55 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 14 04:03:55 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=14 res=1
Oct 14 04:03:55 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 14 04:03:56 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:03:56 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 14 04:03:56 localhost systemd[1]: Reloading.
Oct 14 04:03:56 localhost systemd-rc-local-generator[47042]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:03:56 localhost systemd-sysv-generator[47050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:03:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:03:57 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:03:57 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 04:03:57 localhost systemd[1]: Stopping Journal Service...
Oct 14 04:03:57 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 14 04:03:57 localhost systemd-journald[618]: Received SIGTERM from PID 1 (systemd).
Oct 14 04:03:57 localhost systemd-journald[618]: Journal stopped
Oct 14 04:03:57 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 14 04:03:57 localhost systemd[1]: Stopped Journal Service.
Oct 14 04:03:57 localhost systemd[1]: systemd-journald.service: Consumed 1.711s CPU time.
Oct 14 04:03:57 localhost systemd[1]: Starting Journal Service...
Oct 14 04:03:57 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 14 04:03:57 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 14 04:03:57 localhost systemd[1]: systemd-udevd.service: Consumed 3.043s CPU time.
Oct 14 04:03:57 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 14 04:03:57 localhost systemd-journald[47332]: Journal started
Oct 14 04:03:57 localhost systemd-journald[47332]: Runtime Journal (/run/log/journal/8e1d5208cffec42b50976967e1d1cfd0) is 12.1M, max 314.7M, 302.6M free.
Oct 14 04:03:57 localhost systemd[1]: Started Journal Service.
Oct 14 04:03:57 localhost systemd-journald[47332]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Oct 14 04:03:57 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 14 04:03:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:03:57 localhost systemd-udevd[47337]: Using default interface naming scheme 'rhel-9.0'.
Oct 14 04:03:57 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 14 04:03:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:03:57 localhost systemd[1]: Reloading.
Oct 14 04:03:57 localhost systemd-sysv-generator[47900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:03:57 localhost systemd-rc-local-generator[47896]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:03:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:03:57 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 04:03:58 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 04:03:58 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 14 04:03:58 localhost systemd[1]: man-db-cache-update.service: Consumed 1.356s CPU time.
Oct 14 04:03:58 localhost systemd[1]: run-r65bfd890cafe4c02aa897573bf2fc10b.service: Deactivated successfully.
Oct 14 04:03:58 localhost systemd[1]: run-rde614c0c3349496698deb1cf80c47d36.service: Deactivated successfully.
Oct 14 04:03:59 localhost python3[48390]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Oct 14 04:04:00 localhost python3[48409]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:04:01 localhost python3[48427]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:04:01 localhost python3[48427]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Oct 14 04:04:01 localhost python3[48427]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Oct 14 04:04:09 localhost podman[48438]: 2025-10-14 08:04:01.417899717 +0000 UTC m=+0.042576365 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Oct 14 04:04:09 localhost python3[48427]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 1571c200d626c35388c5864f613dd17fb1618f6192fe622da60a47fa61763c46 --format json
Oct 14 04:04:09 localhost python3[48582]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:04:09 localhost python3[48582]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Oct 14 04:04:09 localhost python3[48582]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Oct 14 04:04:19 localhost podman[48596]: 2025-10-14 08:04:10.027530852 +0000 UTC m=+0.041122597 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Oct 14 04:04:19 localhost python3[48582]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 1e3eee8f9b979ec527f69dda079bc969bf9ddbe65c90f0543f3891d72e56a75e --format json
Oct 14 04:04:19 localhost python3[48756]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:04:19 localhost python3[48756]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG:
/bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Oct 14 04:04:19 localhost python3[48756]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Oct 14 04:04:21 localhost podman[48884]: 2025-10-14 08:04:21.234773154 +0000 UTC m=+0.091410430 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, release=553, RELEASE=main, version=7, GIT_CLEAN=True)
Oct 14 04:04:21 localhost podman[48884]: 2025-10-14 08:04:21.335116723 +0000 UTC m=+0.191753949 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 14 04:04:36 localhost podman[48768]: 2025-10-14 08:04:19.642940644 +0000 UTC m=+0.046008779 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 14 04:04:36 localhost python3[48756]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a56a2196ea2290002b5e3e60b4c440f2326e4f1173ca4d9c0a320716a756e568 --format json
Oct 14 04:04:37 localhost python3[49454]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:04:37 localhost python3[49454]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG:
/bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Oct 14 04:04:37 localhost python3[49454]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Oct 14 04:04:51 localhost podman[49467]: 2025-10-14 08:04:37.455192478 +0000 UTC m=+0.041287982 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 14 04:04:51 localhost python3[49454]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 89ed729ad5d881399a0bbd370b8f3c39b84e5a87c6e02b0d1f2c943d2d9cfb7a --format json
Oct 14 04:04:51 localhost python3[49657]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:04:51 localhost python3[49657]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Oct 14 04:04:51 localhost python3[49657]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Oct 14 04:05:00 localhost podman[49671]: 2025-10-14 08:04:51.566842059 +0000 UTC m=+0.043223873 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Oct 14 04:05:00 localhost python3[49657]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a5e44a6280ab7a1da1b469cc214b40ecdad1d13f0c37c24f32cb45b40cce41d6 --format json
Oct 14 04:05:01 localhost python3[49828]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:05:01 localhost python3[49828]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Oct 14 04:05:01 localhost python3[49828]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Oct 14 04:05:06 localhost podman[49840]: 2025-10-14 08:05:01.376130496 +0000 UTC m=+0.040966924 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Oct 14 04:05:06 localhost python3[49828]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect ef4308e71ba3950618e5de99f6c775558514a06fb9f6d93ca5c54d685a1349a6 --format json
Oct 14 04:05:06 localhost python3[49963]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:05:06 localhost python3[49963]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Oct 14 04:05:07 localhost python3[49963]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Oct 14 04:05:10 localhost podman[49977]: 2025-10-14 08:05:07.068156544 +0000 UTC m=+0.045499654 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Oct 14 04:05:10 localhost python3[49963]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 5b5e3dbf480a168d795a47e53d0695cd833f381ef10119a3de87e5946f6b53e5 --format json
Oct 14 04:05:10 localhost python3[50100]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:05:10 localhost python3[50100]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Oct 14 04:05:10 localhost python3[50100]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Oct 14 04:05:14 localhost podman[50112]: 2025-10-14 08:05:10.819048768 +0000 UTC m=+0.047227651 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Oct 14 04:05:14 localhost python3[50100]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect
250768c493b95c1151e047902a648e6659ba35adb4c6e0af85c231937d0cc9b7 --format json
Oct 14 04:05:14 localhost python3[50235]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:05:14 localhost python3[50235]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Oct 14 04:05:14 localhost python3[50235]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Oct 14 04:05:18 localhost podman[50247]: 2025-10-14 08:05:14.721847772 +0000 UTC m=+0.037167053 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Oct 14 04:05:18 localhost python3[50235]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 68d3d3a77bfc9fce94ca9ce2b28076450b851f6f1e82e97fbe356ce4ab0f7849 --format json
Oct 14 04:05:18 localhost python3[50371]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:05:18 localhost python3[50371]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Oct 14 04:05:18 localhost python3[50371]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Oct 14 04:05:23 localhost podman[50385]: 2025-10-14 08:05:18.761165173 +0000 UTC m=+0.048843835 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Oct 14 04:05:23 localhost python3[50371]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 01fc8d861e2b923ef0bf1d5c40a269bd976b00e8a31e8c56d63f3504b82b1c76 --format json
Oct 14 04:05:24 localhost python3[50519]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 14 04:05:24 localhost python3[50519]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Oct 14 04:05:24 localhost python3[50519]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Oct 14 04:05:27 localhost podman[50532]: 2025-10-14 08:05:24.387676979 +0000 UTC m=+0.045992238 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Oct 14 04:05:27 localhost python3[50519]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 7f7fcb1a516a6191c7a8cb132a460e04d50ca4381f114f08dcbfe84340e49ac0 --format json
Oct 14 04:05:28 localhost python3[50733]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:05:30 localhost ansible-async_wrapper.py[50905]: Invoked with 812018426855 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429129.5282407-83879-259156575512995/AnsiballZ_command.py _
Oct 14 04:05:30 localhost ansible-async_wrapper.py[50908]: Starting module and watcher
Oct 14 04:05:30 localhost ansible-async_wrapper.py[50908]: Start watching 50909 (3600)
Oct 14 04:05:30 localhost ansible-async_wrapper.py[50909]: Start module (50909)
Oct 14 04:05:30 localhost ansible-async_wrapper.py[50905]: Return async_wrapper task started.
Oct 14 04:05:30 localhost python3[50929]: ansible-ansible.legacy.async_status Invoked with jid=812018426855.50905 mode=status _async_dir=/tmp/.ansible_async
Oct 14 04:05:34 localhost puppet-user[50913]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 14 04:05:34 localhost puppet-user[50913]: (file: /etc/puppet/hiera.yaml)
Oct 14 04:05:34 localhost puppet-user[50913]: Warning: Undefined variable '::deploy_config_name';
Oct 14 04:05:34 localhost puppet-user[50913]: (file & line not available)
Oct 14 04:05:34 localhost puppet-user[50913]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 14 04:05:34 localhost puppet-user[50913]: (file & line not available)
Oct 14 04:05:34 localhost puppet-user[50913]: Warning: Unknown variable: '::deployment_type'.
(file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 14 04:05:34 localhost puppet-user[50913]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 14 04:05:34 localhost puppet-user[50913]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.18 seconds Oct 14 04:05:34 localhost puppet-user[50913]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully Oct 14 04:05:34 localhost puppet-user[50913]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created Oct 14 04:05:34 localhost puppet-user[50913]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully Oct 14 04:05:34 localhost puppet-user[50913]: Notice: Applied catalog in 0.07 seconds Oct 14 04:05:34 localhost puppet-user[50913]: Application: Oct 14 04:05:34 localhost puppet-user[50913]: Initial environment: production Oct 14 04:05:34 localhost puppet-user[50913]: Converged environment: production Oct 14 04:05:34 localhost puppet-user[50913]: Run mode: user Oct 14 04:05:34 localhost puppet-user[50913]: Changes: Oct 14 04:05:34 localhost puppet-user[50913]: Total: 3 Oct 14 04:05:34 localhost puppet-user[50913]: Events: Oct 14 04:05:34 localhost puppet-user[50913]: Success: 3 Oct 14 04:05:34 localhost puppet-user[50913]: Total: 3 Oct 14 04:05:34 localhost puppet-user[50913]: Resources: Oct 14 04:05:34 localhost puppet-user[50913]: Changed: 3 Oct 14 04:05:34 localhost puppet-user[50913]: Out of sync: 3 Oct 14 04:05:34 localhost puppet-user[50913]: Total: 10 Oct 14 04:05:34 localhost puppet-user[50913]: Time: Oct 14 04:05:34 localhost puppet-user[50913]: Schedule: 0.00 Oct 14 04:05:34 localhost puppet-user[50913]: File: 0.00 Oct 14 
04:05:34 localhost puppet-user[50913]: Exec: 0.02 Oct 14 04:05:34 localhost puppet-user[50913]: Augeas: 0.04 Oct 14 04:05:34 localhost puppet-user[50913]: Transaction evaluation: 0.07 Oct 14 04:05:34 localhost puppet-user[50913]: Catalog application: 0.07 Oct 14 04:05:34 localhost puppet-user[50913]: Config retrieval: 0.21 Oct 14 04:05:34 localhost puppet-user[50913]: Last run: 1760429134 Oct 14 04:05:34 localhost puppet-user[50913]: Filebucket: 0.00 Oct 14 04:05:34 localhost puppet-user[50913]: Total: 0.08 Oct 14 04:05:34 localhost puppet-user[50913]: Version: Oct 14 04:05:34 localhost puppet-user[50913]: Config: 1760429134 Oct 14 04:05:34 localhost puppet-user[50913]: Puppet: 7.10.0 Oct 14 04:05:34 localhost ansible-async_wrapper.py[50909]: Module complete (50909) Oct 14 04:05:35 localhost ansible-async_wrapper.py[50908]: Done in kid B. Oct 14 04:05:41 localhost python3[51056]: ansible-ansible.legacy.async_status Invoked with jid=812018426855.50905 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:05:41 localhost python3[51072]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:05:42 localhost python3[51088]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:05:42 localhost python3[51136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:05:43 localhost python3[51179]: ansible-ansible.legacy.copy 
Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429142.3297422-84142-253339586380408/source _original_basename=tmpd71yulpz follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:05:43 localhost python3[51209]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:05:44 localhost python3[51312]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Oct 14 04:05:45 localhost python3[51331]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None 
selevel=None attributes=None Oct 14 04:05:45 localhost python3[51347]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005486731 step=1 update_config_hash_only=False Oct 14 04:05:46 localhost python3[51363]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:05:46 localhost python3[51379]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 14 04:05:47 localhost python3[51395]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 14 04:05:48 localhost python3[51435]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False Oct 14 04:05:49 localhost podman[51611]: 2025-10-14 08:05:49.017863919 +0000 UTC m=+0.118246006 container create 891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, version=17.1.9, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 
'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, build-date=2025-07-21T13:04:03, release=2, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, container_name=container-puppet-collectd, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 14 04:05:49 localhost podman[51611]: 2025-10-14 08:05:48.933269712 +0000 UTC m=+0.033651839 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 14 04:05:49 localhost podman[51633]: 2025-10-14 08:05:49.036891007 +0000 UTC m=+0.109186305 container create 40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, distribution-scope=public, container_name=container-puppet-iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, release=1, architecture=x86_64, tcib_managed=true, config_id=tripleo_puppet_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid) Oct 14 04:05:49 localhost podman[51625]: 2025-10-14 08:05:49.064847373 +0000 UTC m=+0.133813772 container create 64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, container_name=container-puppet-metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, 
tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Oct 14 04:05:49 localhost systemd[1]: Started libpod-conmon-891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86.scope. Oct 14 04:05:49 localhost podman[51621]: 2025-10-14 08:05:48.970207587 +0000 UTC m=+0.058677967 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:05:49 localhost podman[51633]: 2025-10-14 08:05:48.969450767 +0000 UTC m=+0.041746065 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 14 04:05:49 localhost systemd[1]: Started libpod-conmon-40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007.scope. 
Oct 14 04:05:49 localhost podman[51625]: 2025-10-14 08:05:48.976987859 +0000 UTC m=+0.045954278 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 14 04:05:49 localhost podman[51640]: 2025-10-14 08:05:49.091408212 +0000 UTC m=+0.148142004 container create 7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, 
description=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, architecture=x86_64, io.buildah.version=1.33.12) Oct 14 04:05:49 localhost systemd[1]: Started libcrun container. Oct 14 04:05:49 localhost systemd[1]: Started libcrun container. Oct 14 04:05:49 localhost systemd[1]: Started libpod-conmon-64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688.scope. Oct 14 04:05:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94c8ed49a708b3cf7decc1af1486bf21a75d0bfa1928c9a829c7de69159b6ccb/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8663d2c3d5618f36fce8356c62a3252481fa61416414a2be1734fcb387a75a33/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94c8ed49a708b3cf7decc1af1486bf21a75d0bfa1928c9a829c7de69159b6ccb/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:49 localhost systemd[1]: Started libcrun container. 
Oct 14 04:05:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21837a037040259e69cb40b47a6715b197d579cd205243ce8d40aaf45d9a6d8f/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:49 localhost systemd[1]: Started libpod-conmon-7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a.scope. Oct 14 04:05:49 localhost podman[51611]: 2025-10-14 08:05:49.117821877 +0000 UTC m=+0.218203954 container init 891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, distribution-scope=public, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, container_name=container-puppet-collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Oct 14 04:05:49 localhost podman[51625]: 2025-10-14 08:05:49.123048656 +0000 UTC m=+0.192015055 container init 64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1) Oct 14 04:05:49 localhost systemd[1]: Started libcrun container. 
Oct 14 04:05:49 localhost podman[51611]: 2025-10-14 08:05:49.129154469 +0000 UTC m=+0.229536546 container start 891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_puppet_step1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, release=2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, com.redhat.component=openstack-collectd-container) Oct 14 04:05:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74500a46616905488a2d34409fc38428e7baca36003522cc9b6c6fef05025663/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:49 localhost podman[51611]: 2025-10-14 08:05:49.129702984 +0000 UTC m=+0.230085131 container attach 891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, vcs-type=git, container_name=container-puppet-collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-collectd, config_id=tripleo_puppet_step1, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:05:49 localhost podman[51640]: 2025-10-14 08:05:49.137937913 +0000 UTC m=+0.194671705 container init 7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_id=tripleo_puppet_step1, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12) Oct 14 04:05:49 localhost podman[51640]: 2025-10-14 08:05:49.142963528 +0000 UTC m=+0.199697300 
container start 7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, config_id=tripleo_puppet_step1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, 
com.redhat.component=openstack-cron-container, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, distribution-scope=public) Oct 14 04:05:49 localhost podman[51640]: 2025-10-14 08:05:49.143112602 +0000 UTC m=+0.199846404 container attach 7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_id=tripleo_puppet_step1) Oct 14 04:05:49 localhost podman[51640]: 2025-10-14 08:05:49.044692265 +0000 UTC m=+0.101426087 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 14 04:05:49 localhost podman[51621]: 2025-10-14 08:05:49.148012503 +0000 UTC m=+0.236482883 container create 19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, container_name=container-puppet-nova_libvirt, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-libvirt, vcs-type=git, version=17.1.9, release=2, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container) Oct 14 04:05:49 localhost systemd[1]: Started libpod-conmon-19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c.scope. Oct 14 04:05:49 localhost systemd[1]: Started libcrun container. Oct 14 04:05:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c1375b47f7238425ac168df0b31eebcac7daf8f7b82fa846760d02ff141bc67/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:49 localhost podman[51633]: 2025-10-14 08:05:49.956867917 +0000 UTC m=+1.029163575 container init 40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, config_id=tripleo_puppet_step1, distribution-scope=public, container_name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 04:05:49 localhost podman[51633]: 2025-10-14 08:05:49.979637645 +0000 UTC m=+1.051932973 container start 40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, container_name=container-puppet-iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-type=git, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9) Oct 14 04:05:49 localhost podman[51633]: 2025-10-14 
08:05:49.981310539 +0000 UTC m=+1.053605867 container attach 40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, release=1, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12) Oct 14 04:05:49 localhost podman[51625]: 2025-10-14 08:05:49.987239317 +0000 UTC m=+1.056205746 container start 64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, vcs-type=git, version=17.1.9, release=1, container_name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, io.openshift.expose-services=) Oct 14 04:05:49 localhost podman[51625]: 2025-10-14 08:05:49.987989258 +0000 UTC m=+1.056955727 container attach 64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, container_name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.9, 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:05:50 localhost podman[51621]: 2025-10-14 08:05:50.001628312 +0000 UTC m=+1.090098712 container init 19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.openshift.expose-services=, 
name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, container_name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=2) Oct 14 04:05:50 localhost podman[51621]: 2025-10-14 08:05:50.010656603 +0000 UTC m=+1.099127003 container start 19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, distribution-scope=public, build-date=2025-07-21T14:56:59, config_id=tripleo_puppet_step1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, 
tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 14 04:05:50 localhost podman[51621]: 2025-10-14 08:05:50.010978781 +0000 UTC m=+1.099449221 container attach 19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, container_name=container-puppet-nova_libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, release=2, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git) Oct 14 04:05:51 localhost puppet-user[51746]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 14 04:05:51 localhost puppet-user[51746]: (file: /etc/puppet/hiera.yaml) Oct 14 04:05:51 localhost puppet-user[51746]: Warning: Undefined variable '::deploy_config_name'; Oct 14 04:05:51 localhost puppet-user[51746]: (file & line not available) Oct 14 04:05:51 localhost ovs-vsctl[51965]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Oct 14 04:05:51 localhost puppet-user[51746]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 14 04:05:51 localhost puppet-user[51746]: (file & line not available) Oct 14 04:05:51 localhost podman[51501]: 2025-10-14 08:05:48.83540895 +0000 UTC m=+0.045887325 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Oct 14 04:05:51 localhost puppet-user[51746]: Notice: Accepting previously invalid value for target type 'Integer' Oct 14 04:05:51 localhost puppet-user[51747]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 14 04:05:51 localhost puppet-user[51747]: (file: /etc/puppet/hiera.yaml) Oct 14 04:05:51 localhost puppet-user[51747]: Warning: Undefined variable '::deploy_config_name'; Oct 14 04:05:51 localhost puppet-user[51747]: (file & line not available) Oct 14 04:05:51 localhost puppet-user[51759]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 14 04:05:51 localhost puppet-user[51759]: (file: /etc/puppet/hiera.yaml) Oct 14 04:05:51 localhost puppet-user[51759]: Warning: Undefined variable '::deploy_config_name'; Oct 14 04:05:51 localhost puppet-user[51759]: (file & line not available) Oct 14 04:05:51 localhost puppet-user[51746]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.12 seconds Oct 14 04:05:51 localhost puppet-user[51747]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 14 04:05:51 localhost puppet-user[51747]: (file & line not available) Oct 14 04:05:51 localhost puppet-user[51759]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 14 04:05:51 localhost puppet-user[51759]: (file & line not available) Oct 14 04:05:51 localhost puppet-user[51746]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Oct 14 04:05:51 localhost puppet-user[51742]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 14 04:05:51 localhost puppet-user[51742]: (file: /etc/puppet/hiera.yaml) Oct 14 04:05:51 localhost puppet-user[51742]: Warning: Undefined variable '::deploy_config_name'; Oct 14 04:05:51 localhost puppet-user[51742]: (file & line not available) Oct 14 04:05:51 localhost puppet-user[51746]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Oct 14 04:05:51 localhost puppet-user[51746]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Oct 14 04:05:51 localhost puppet-user[51746]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Oct 14 04:05:51 localhost puppet-user[51746]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}0822a70c307706fa8e599694d00332b3c7d8095ceec21b459420f2ea1fbd051e' Oct 14 04:05:51 localhost puppet-user[51746]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Oct 14 04:05:51 localhost puppet-user[51746]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Oct 14 04:05:51 localhost puppet-user[51746]: Notice: Applied catalog in 0.03 seconds Oct 14 04:05:51 localhost puppet-user[51746]: Application: Oct 14 04:05:51 localhost puppet-user[51746]: Initial environment: production Oct 14 04:05:51 localhost puppet-user[51746]: Converged environment: production Oct 14 04:05:51 localhost puppet-user[51746]: Run mode: user Oct 14 04:05:51 localhost puppet-user[51746]: Changes: Oct 14 04:05:51 localhost puppet-user[51746]: Total: 7 Oct 14 04:05:51 localhost puppet-user[51746]: Events: Oct 14 04:05:51 localhost puppet-user[51746]: Success: 7 Oct 14 04:05:51 localhost puppet-user[51746]: Total: 7 Oct 14 04:05:51 localhost puppet-user[51746]: Resources: Oct 14 04:05:51 localhost puppet-user[51746]: Skipped: 13 Oct 14 
04:05:51 localhost puppet-user[51746]: Changed: 5 Oct 14 04:05:51 localhost puppet-user[51746]: Out of sync: 5 Oct 14 04:05:51 localhost puppet-user[51746]: Total: 20 Oct 14 04:05:51 localhost puppet-user[51746]: Time: Oct 14 04:05:51 localhost puppet-user[51746]: File: 0.02 Oct 14 04:05:51 localhost puppet-user[51746]: Transaction evaluation: 0.03 Oct 14 04:05:51 localhost puppet-user[51746]: Catalog application: 0.03 Oct 14 04:05:51 localhost puppet-user[51746]: Config retrieval: 0.15 Oct 14 04:05:51 localhost puppet-user[51746]: Last run: 1760429151 Oct 14 04:05:51 localhost puppet-user[51746]: Total: 0.03 Oct 14 04:05:51 localhost puppet-user[51746]: Version: Oct 14 04:05:51 localhost puppet-user[51746]: Config: 1760429151 Oct 14 04:05:51 localhost puppet-user[51746]: Puppet: 7.10.0 Oct 14 04:05:51 localhost puppet-user[51770]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 14 04:05:51 localhost puppet-user[51770]: (file: /etc/puppet/hiera.yaml) Oct 14 04:05:51 localhost puppet-user[51770]: Warning: Undefined variable '::deploy_config_name'; Oct 14 04:05:51 localhost puppet-user[51770]: (file & line not available) Oct 14 04:05:51 localhost puppet-user[51759]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.10 seconds Oct 14 04:05:51 localhost puppet-user[51742]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 14 04:05:51 localhost puppet-user[51742]: (file & line not available) Oct 14 04:05:51 localhost puppet-user[51770]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 14 04:05:51 localhost puppet-user[51770]: (file & line not available) Oct 14 04:05:51 localhost puppet-user[51742]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.11 seconds Oct 14 04:05:51 localhost puppet-user[51759]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Oct 14 04:05:51 localhost puppet-user[51759]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Oct 14 04:05:51 localhost puppet-user[51759]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Oct 14 04:05:51 localhost podman[52191]: 2025-10-14 08:05:51.965892119 +0000 UTC m=+0.068586162 container create 10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-central-container, build-date=2025-07-21T14:49:23, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-central, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, architecture=x86_64, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 
'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, batch=17.1_20250721.1, container_name=container-puppet-ceilometer, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:05:51 localhost puppet-user[51742]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Oct 14 04:05:51 localhost puppet-user[51742]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Oct 14 04:05:52 localhost systemd[1]: Started 
libpod-conmon-10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd.scope. Oct 14 04:05:52 localhost puppet-user[51742]: Notice: Applied catalog in 0.07 seconds Oct 14 04:05:52 localhost puppet-user[51742]: Application: Oct 14 04:05:52 localhost puppet-user[51742]: Initial environment: production Oct 14 04:05:52 localhost puppet-user[51742]: Converged environment: production Oct 14 04:05:52 localhost puppet-user[51742]: Run mode: user Oct 14 04:05:52 localhost puppet-user[51742]: Changes: Oct 14 04:05:52 localhost puppet-user[51742]: Total: 2 Oct 14 04:05:52 localhost puppet-user[51742]: Events: Oct 14 04:05:52 localhost puppet-user[51742]: Success: 2 Oct 14 04:05:52 localhost puppet-user[51742]: Total: 2 Oct 14 04:05:52 localhost puppet-user[51742]: Resources: Oct 14 04:05:52 localhost puppet-user[51742]: Changed: 2 Oct 14 04:05:52 localhost puppet-user[51742]: Out of sync: 2 Oct 14 04:05:52 localhost puppet-user[51742]: Skipped: 7 Oct 14 04:05:52 localhost puppet-user[51742]: Total: 9 Oct 14 04:05:52 localhost puppet-user[51742]: Time: Oct 14 04:05:52 localhost puppet-user[51742]: File: 0.01 Oct 14 04:05:52 localhost puppet-user[51742]: Cron: 0.01 Oct 14 04:05:52 localhost puppet-user[51742]: Transaction evaluation: 0.06 Oct 14 04:05:52 localhost puppet-user[51742]: Catalog application: 0.07 Oct 14 04:05:52 localhost puppet-user[51742]: Config retrieval: 0.14 Oct 14 04:05:52 localhost puppet-user[51742]: Last run: 1760429152 Oct 14 04:05:52 localhost puppet-user[51742]: Total: 0.07 Oct 14 04:05:52 localhost puppet-user[51742]: Version: Oct 14 04:05:52 localhost puppet-user[51742]: Config: 1760429151 Oct 14 04:05:52 localhost puppet-user[51742]: Puppet: 7.10.0 Oct 14 04:05:52 localhost systemd[1]: Started libcrun container. 
Oct 14 04:05:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c763704100a115f96b041a65b3a8f6522965320f15224e7afd8516b03357b7/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:52 localhost podman[52191]: 2025-10-14 08:05:52.028030477 +0000 UTC m=+0.130724580 container init 10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, batch=17.1_20250721.1, build-date=2025-07-21T14:49:23, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, com.redhat.component=openstack-ceilometer-central-container, container_name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp17/openstack-ceilometer-central, io.buildah.version=1.33.12, vcs-type=git) Oct 14 04:05:52 localhost podman[52191]: 2025-10-14 08:05:51.935063847 +0000 UTC m=+0.037757920 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Oct 14 04:05:52 localhost podman[52191]: 2025-10-14 08:05:52.040440148 +0000 UTC m=+0.143134261 container start 10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:49:23, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-central-container, name=rhosp17/openstack-ceilometer-central, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, batch=17.1_20250721.1, container_name=container-puppet-ceilometer, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:05:52 localhost podman[52191]: 2025-10-14 
08:05:52.041054675 +0000 UTC m=+0.143748758 container attach 10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_puppet_step1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-ceilometer, com.redhat.component=openstack-ceilometer-central-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, version=17.1.9, build-date=2025-07-21T14:49:23, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-central, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central) Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Oct 14 04:05:52 localhost puppet-user[51770]: in a future release. Use nova::cinder::os_region_name instead Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Oct 14 04:05:52 localhost puppet-user[51770]: in a future release. Use nova::cinder::catalog_info instead Oct 14 04:05:52 localhost puppet-user[51747]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.37 seconds Oct 14 04:05:52 localhost systemd[1]: libpod-64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688.scope: Deactivated successfully. Oct 14 04:05:52 localhost systemd[1]: libpod-64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688.scope: Consumed 2.092s CPU time. Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. 
(file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Oct 14 04:05:52 localhost podman[51625]: 2025-10-14 08:05:52.31256593 +0000 UTC m=+3.381532409 container died 64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, config_id=tripleo_puppet_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, release=1, container_name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd) Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. 
Use the same parameter in nova::glance Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Oct 14 04:05:52 localhost systemd[1]: libpod-7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a.scope: Deactivated successfully. Oct 14 04:05:52 localhost systemd[1]: libpod-7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a.scope: Consumed 2.259s CPU time. 
Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51759]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: 
/Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51759]: Notice: Applied catalog in 0.50 seconds Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51759]: Application: Oct 14 04:05:52 localhost puppet-user[51759]: Initial environment: production Oct 14 04:05:52 localhost puppet-user[51759]: Converged environment: production Oct 14 04:05:52 localhost puppet-user[51759]: Run mode: user Oct 14 04:05:52 localhost puppet-user[51759]: Changes: Oct 14 04:05:52 localhost puppet-user[51759]: Total: 4 Oct 14 04:05:52 localhost puppet-user[51759]: Events: Oct 14 04:05:52 localhost puppet-user[51759]: Success: 4 Oct 14 04:05:52 localhost puppet-user[51759]: Total: 4 Oct 14 04:05:52 localhost puppet-user[51759]: Resources: Oct 14 04:05:52 localhost puppet-user[51759]: Changed: 4 Oct 14 04:05:52 localhost puppet-user[51759]: Out of sync: 4 Oct 14 04:05:52 localhost puppet-user[51759]: Skipped: 8 Oct 14 04:05:52 localhost puppet-user[51759]: Total: 13 Oct 14 04:05:52 localhost puppet-user[51759]: Time: Oct 14 04:05:52 localhost puppet-user[51759]: File: 0.00 Oct 14 04:05:52 localhost puppet-user[51759]: Exec: 0.04 Oct 14 04:05:52 localhost puppet-user[51759]: Config retrieval: 0.13 Oct 14 04:05:52 localhost puppet-user[51759]: Augeas: 0.44 Oct 14 04:05:52 localhost 
puppet-user[51759]: Transaction evaluation: 0.49 Oct 14 04:05:52 localhost puppet-user[51759]: Catalog application: 0.50 Oct 14 04:05:52 localhost puppet-user[51759]: Last run: 1760429152 Oct 14 04:05:52 localhost puppet-user[51759]: Total: 0.50 Oct 14 04:05:52 localhost puppet-user[51759]: Version: Oct 14 04:05:52 localhost puppet-user[51759]: Config: 1760429151 Oct 14 04:05:52 localhost puppet-user[51759]: Puppet: 7.10.0 Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Oct 14 04:05:52 localhost podman[52290]: 2025-10-14 08:05:52.424156538 +0000 UTC m=+0.099274690 container cleanup 64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, architecture=x86_64, container_name=container-puppet-metrics_qdr, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Oct 14 04:05:52 localhost systemd[1]: libpod-conmon-64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688.scope: Deactivated successfully. 
Oct 14 04:05:52 localhost podman[51640]: 2025-10-14 08:05:52.434095923 +0000 UTC m=+3.490829735 container died 7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=container-puppet-crond, vcs-type=git, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-cron, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Oct 14 04:05:52 localhost python3[51435]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005486731 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as 
'{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Oct 14 04:05:52 localhost puppet-user[51770]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}dee3f10cb1ff461ac3f1e743a5ef3f06993398c6c829895de1dae7f242a64b39' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: 
/Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Oct 14 04:05:52 localhost podman[52311]: 2025-10-14 08:05:52.588156154 +0000 UTC m=+0.192559369 container cleanup 7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, architecture=x86_64, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Oct 14 04:05:52 localhost systemd[1]: libpod-conmon-7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a.scope: Deactivated successfully. 
Oct 14 04:05:52 localhost python3[51435]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005486731 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Oct 14 04:05:52 localhost puppet-user[51747]: 
Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62'
Oct 14 04:05:52 localhost puppet-user[51747]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed
Oct 14 04:05:52 localhost puppet-user[51747]: Notice: Applied catalog in 0.43 seconds
Oct 14 04:05:52 localhost puppet-user[51747]: Application:
Oct 14 04:05:52 localhost puppet-user[51747]: Initial environment: production
Oct 14 04:05:52 localhost puppet-user[51747]: Converged environment: production
Oct 14 04:05:52 localhost puppet-user[51747]: Run mode: user
Oct 14 04:05:52 localhost puppet-user[51747]: Changes:
Oct 14 04:05:52 localhost puppet-user[51747]: Total: 43
Oct 14 04:05:52 localhost puppet-user[51747]: Events:
Oct 14 04:05:52 localhost puppet-user[51747]: Success: 43
Oct 14 04:05:52 localhost puppet-user[51747]: Total: 43
Oct 14 04:05:52 localhost puppet-user[51747]: Resources:
Oct 14 04:05:52 localhost puppet-user[51747]: Skipped: 14
Oct 14 04:05:52 localhost puppet-user[51747]: Changed: 38
Oct 14 04:05:52 localhost puppet-user[51747]: Out of sync: 38
Oct 14 04:05:52 localhost puppet-user[51747]: Total: 82
Oct 14 04:05:52 localhost puppet-user[51747]: Time:
Oct 14 04:05:52 localhost puppet-user[51747]: Concat file: 0.00
Oct 14 04:05:52 localhost puppet-user[51747]: Concat fragment: 0.00
Oct 14 04:05:52 localhost puppet-user[51747]: File: 0.25
Oct 14 04:05:52 localhost puppet-user[51747]: Transaction evaluation: 0.43
Oct 14 04:05:52 localhost puppet-user[51747]: Config retrieval: 0.43
Oct 14 04:05:52 localhost puppet-user[51747]: Catalog application: 0.43
Oct 14 04:05:52 localhost puppet-user[51747]: Last run: 1760429152
Oct 14 04:05:52 localhost puppet-user[51747]: Total: 0.43
Oct 14 04:05:52 localhost puppet-user[51747]: Version:
Oct 14 04:05:52 localhost puppet-user[51747]: Config: 1760429151
Oct 14 04:05:52 localhost
puppet-user[51747]: Puppet: 7.10.0 Oct 14 04:05:52 localhost systemd[1]: libpod-40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007.scope: Deactivated successfully. Oct 14 04:05:52 localhost systemd[1]: libpod-40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007.scope: Consumed 2.549s CPU time. Oct 14 04:05:52 localhost podman[51633]: 2025-10-14 08:05:52.725664454 +0000 UTC m=+3.797959792 container died 40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, version=17.1.9, release=1, vendor=Red Hat, Inc., distribution-scope=public, container_name=container-puppet-iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Oct 14 04:05:52 localhost podman[52423]: 2025-10-14 08:05:52.913908887 +0000 UTC m=+0.173986194 container cleanup 40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, container_name=container-puppet-iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9) Oct 14 04:05:52 localhost systemd[1]: libpod-conmon-40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007.scope: Deactivated successfully. 
Oct 14 04:05:52 localhost podman[52453]: 2025-10-14 08:05:52.926657758 +0000 UTC m=+0.111046665 container create 26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, container_name=container-puppet-rsyslog, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_puppet_step1, architecture=x86_64, build-date=2025-07-21T12:58:40, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog) Oct 14 04:05:52 localhost python3[51435]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005486731 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 14 04:05:52 localhost podman[52453]: 2025-10-14 08:05:52.853482575 +0000 UTC m=+0.037871492 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 14 04:05:52 localhost systemd[1]: Started libpod-conmon-26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3.scope. 
Oct 14 04:05:52 localhost systemd[1]: Started libcrun container. Oct 14 04:05:52 localhost systemd[1]: tmp-crun.1Nivnl.mount: Deactivated successfully. Oct 14 04:05:52 localhost systemd[1]: var-lib-containers-storage-overlay-74500a46616905488a2d34409fc38428e7baca36003522cc9b6c6fef05025663-merged.mount: Deactivated successfully. Oct 14 04:05:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d9db87eedccf80252d42ceab9ffbeee32f1fc196854d7de95db12e18718c29a-userdata-shm.mount: Deactivated successfully. Oct 14 04:05:52 localhost systemd[1]: var-lib-containers-storage-overlay-21837a037040259e69cb40b47a6715b197d579cd205243ce8d40aaf45d9a6d8f-merged.mount: Deactivated successfully. Oct 14 04:05:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64f4ae80ccd74d79260f8a55cde9ea7aa55fdfd59e111f8ac952d08620a57688-userdata-shm.mount: Deactivated successfully. Oct 14 04:05:52 localhost systemd[1]: var-lib-containers-storage-overlay-94c8ed49a708b3cf7decc1af1486bf21a75d0bfa1928c9a829c7de69159b6ccb-merged.mount: Deactivated successfully. Oct 14 04:05:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40cdc2e499ceed3c5f7210c4fc8b1670895ca33c96c1f2ab28216b14b846f007-userdata-shm.mount: Deactivated successfully. 
Oct 14 04:05:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bfc1d5359a39ca467891151850ad29ab2405c99c0e73704689224632337029/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:52 localhost podman[52453]: 2025-10-14 08:05:52.985342034 +0000 UTC m=+0.169730941 container init 26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vendor=Red Hat, Inc., vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-rsyslog-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:40, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, release=1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, distribution-scope=public, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 14 04:05:52 localhost podman[52453]: 2025-10-14 08:05:52.9953358 +0000 UTC m=+0.179724707 container start 26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, distribution-scope=public, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, com.redhat.component=openstack-rsyslog-container, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:40, container_name=container-puppet-rsyslog, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-rsyslog, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 14 04:05:52 localhost podman[52453]: 2025-10-14 08:05:52.995511025 +0000 UTC m=+0.179899952 container attach 26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 
'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_puppet_step1, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, version=17.1.9, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-rsyslog, container_name=container-puppet-rsyslog, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, 
io.openshift.expose-services=, release=1) Oct 14 04:05:53 localhost systemd[1]: libpod-891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86.scope: Deactivated successfully. Oct 14 04:05:53 localhost systemd[1]: libpod-891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86.scope: Consumed 2.698s CPU time. Oct 14 04:05:53 localhost podman[51611]: 2025-10-14 08:05:53.052023952 +0000 UTC m=+4.152406049 container died 891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, container_name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, tcib_managed=true, io.buildah.version=1.33.12) Oct 14 04:05:53 localhost podman[52543]: 2025-10-14 08:05:53.086798381 +0000 UTC m=+0.064463641 container create 3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, version=17.1.9, vcs-type=git, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:05:53 localhost systemd[1]: Started libpod-conmon-3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8.scope. Oct 14 04:05:53 localhost puppet-user[51770]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 1.27 seconds Oct 14 04:05:53 localhost systemd[1]: Started libcrun container. 
Oct 14 04:05:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e508b56e5c4215a90f6b7ab87161275acbfc49ce32885eceeaa718ef9d09113/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e508b56e5c4215a90f6b7ab87161275acbfc49ce32885eceeaa718ef9d09113/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Oct 14 04:05:53 localhost podman[52543]: 2025-10-14 08:05:53.147578683 +0000 UTC m=+0.125243953 container init 3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': 
['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1) Oct 14 04:05:53 localhost podman[52543]: 2025-10-14 08:05:53.15347234 +0000 UTC m=+0.131137610 container start 3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.expose-services=, release=1) Oct 14 04:05:53 localhost podman[52543]: 2025-10-14 08:05:53.054498189 +0000 UTC m=+0.032163469 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 14 04:05:53 localhost podman[52543]: 2025-10-14 08:05:53.153668586 +0000 UTC m=+0.131333856 container attach 3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, container_name=container-puppet-ovn_controller, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1) Oct 14 04:05:53 localhost podman[52566]: 2025-10-14 08:05:53.228632545 +0000 UTC m=+0.169147064 container cleanup 891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, tcib_managed=true, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=container-puppet-collectd, maintainer=OpenStack TripleO Team) Oct 14 04:05:53 localhost systemd[1]: libpod-conmon-891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86.scope: Deactivated successfully. 
Oct 14 04:05:53 localhost python3[51435]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005486731 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}0fdf4bb00e72dcbd4ea68fd251936fe0a9549636ee60a27eac1ec895516e4cd2' Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Oct 14 04:05:53 localhost puppet-user[51770]: Warning: Empty environment setting 'TLS_PASSWORD' 
Oct 14 04:05:53 localhost puppet-user[51770]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}36f28826b5ffa6d226163e1590f039bf4106b313174c95fd46ed1b050a897488' Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Oct 14 
04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created Oct 14 04:05:53 localhost systemd[1]: var-lib-containers-storage-overlay-8663d2c3d5618f36fce8356c62a3252481fa61416414a2be1734fcb387a75a33-merged.mount: Deactivated successfully. Oct 14 04:05:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-891df569b1915d20a1c7482c7128e40d0c9d49faca9c9eeba762c021c1d95c86-userdata-shm.mount: Deactivated successfully. 
Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Oct 14 04:05:53 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 14 04:05:54 localhost puppet-user[52308]: (file: /etc/puppet/hiera.yaml)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Undefined variable '::deploy_config_name';
Oct 14 04:05:54 localhost puppet-user[52308]: (file & line not available)
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 14 04:05:54 localhost puppet-user[52308]: (file & line not available)
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39)
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39)
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36)
Oct 14 04:05:54 localhost puppet-user[52308]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26)
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.37 seconds
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52560]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 14 04:05:54 localhost puppet-user[52560]: (file: /etc/puppet/hiera.yaml)
Oct 14 04:05:54 localhost puppet-user[52560]: Warning: Undefined variable '::deploy_config_name';
Oct 14 04:05:54 localhost puppet-user[52560]: (file & line not available)
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52560]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 14 04:05:54 localhost puppet-user[52560]: (file & line not available)
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52619]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 14 04:05:54 localhost puppet-user[52619]: (file: /etc/puppet/hiera.yaml)
Oct 14 04:05:54 localhost puppet-user[52619]: Warning: Undefined variable '::deploy_config_name';
Oct 14 04:05:54 localhost puppet-user[52619]: (file & line not available)
Oct 14 04:05:54 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52308]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created
Oct 14 04:05:54 localhost puppet-user[52560]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.23 seconds
Oct 14 04:05:55 localhost puppet-user[52619]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 14 04:05:55 localhost puppet-user[52619]: (file & line not available)
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Oct 14 04:05:55 localhost puppet-user[52308]: Notice: Applied catalog in 0.40 seconds
Oct 14 04:05:55 localhost puppet-user[52308]: Application:
Oct 14 04:05:55 localhost puppet-user[52308]: Initial environment: production
Oct 14 04:05:55 localhost puppet-user[52308]: Converged environment: production
Oct 14 04:05:55 localhost puppet-user[52308]: Run mode: user
Oct 14 04:05:55 localhost puppet-user[52308]: Changes:
Oct 14 04:05:55 localhost puppet-user[52308]: Total: 31
Oct 14 04:05:55 localhost puppet-user[52308]: Events:
Oct 14 04:05:55 localhost puppet-user[52308]: Success: 31
Oct 14 04:05:55 localhost puppet-user[52308]: Total: 31
Oct 14 04:05:55 localhost puppet-user[52308]: Resources:
Oct 14 04:05:55 localhost puppet-user[52308]: Skipped: 22
Oct 14 04:05:55 localhost puppet-user[52308]: Changed: 31
Oct 14 04:05:55 localhost puppet-user[52308]: Out of sync: 31
Oct 14 04:05:55 localhost puppet-user[52308]: Total: 151
Oct 14 04:05:55 localhost puppet-user[52308]: Time:
Oct 14 04:05:55 localhost puppet-user[52308]: Package: 0.03
Oct 14 04:05:55 localhost puppet-user[52308]: Ceilometer config: 0.32
Oct 14 04:05:55 localhost puppet-user[52308]: Transaction evaluation: 0.40
Oct 14 04:05:55 localhost puppet-user[52308]: Catalog application: 0.40
Oct 14 04:05:55 localhost puppet-user[52308]: Config retrieval: 0.44
Oct 14 04:05:55 localhost puppet-user[52308]: Last run: 1760429155
Oct 14 04:05:55 localhost puppet-user[52308]: Resources: 0.00
Oct 14 04:05:55 localhost puppet-user[52308]: Total: 0.41
Oct 14 04:05:55 localhost puppet-user[52308]: Version:
Oct 14 04:05:55 localhost puppet-user[52308]: Config: 1760429154
Oct 14 04:05:55 localhost puppet-user[52308]: Puppet: 7.10.0
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}12e84d657e52aba69da43e57ae7a44cbb966f3f84d32b3c865ab366a8f5b2c46'
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Oct 14 04:05:55 localhost puppet-user[52560]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Oct 14 04:05:55 localhost puppet-user[52560]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Oct 14 04:05:55 localhost puppet-user[52560]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}6d89c49687616c5e59c9c002fcf48eecca9d8d0df7ca4590fce2e15608d780ce'
Oct 14 04:05:55 localhost puppet-user[52560]: Notice: Applied catalog in 0.10 seconds
Oct 14 04:05:55 localhost puppet-user[52560]: Application:
Oct 14 04:05:55 localhost puppet-user[52560]: Initial environment: production
Oct 14 04:05:55 localhost puppet-user[52560]: Converged environment: production
Oct 14 04:05:55 localhost puppet-user[52560]: Run mode: user
Oct 14 04:05:55 localhost puppet-user[52560]: Changes:
Oct 14 04:05:55 localhost puppet-user[52560]: Total: 3
Oct 14 04:05:55 localhost puppet-user[52560]: Events:
Oct 14 04:05:55 localhost puppet-user[52560]: Success: 3
Oct 14 04:05:55 localhost puppet-user[52560]: Total: 3
Oct 14 04:05:55 localhost puppet-user[52560]: Resources:
Oct 14 04:05:55 localhost puppet-user[52560]: Skipped: 11
Oct 14 04:05:55 localhost puppet-user[52560]: Changed: 3
Oct 14 04:05:55 localhost puppet-user[52560]: Out of sync: 3
Oct 14 04:05:55 localhost puppet-user[52560]: Total: 25
Oct 14 04:05:55 localhost puppet-user[52560]: Time:
Oct 14 04:05:55 localhost puppet-user[52560]: Concat file: 0.00
Oct 14 04:05:55 localhost puppet-user[52560]: Concat fragment: 0.00
Oct 14 04:05:55 localhost puppet-user[52560]: File: 0.01
Oct 14 04:05:55 localhost puppet-user[52560]: Transaction evaluation: 0.09
Oct 14 04:05:55 localhost puppet-user[52560]: Catalog application: 0.10
Oct 14 04:05:55 localhost puppet-user[52560]: Config retrieval: 0.28
Oct 14 04:05:55 localhost puppet-user[52560]: Last run: 1760429155
Oct 14 04:05:55 localhost puppet-user[52560]: Total: 0.10
Oct 14 04:05:55 localhost puppet-user[52560]: Version:
Oct 14 04:05:55 localhost puppet-user[52560]: Config: 1760429154
Oct 14 04:05:55 localhost puppet-user[52560]: Puppet: 7.10.0
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.26 seconds
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Oct 14 04:05:55 localhost ovs-vsctl[52872]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Oct 14 04:05:55 localhost ovs-vsctl[52874]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Oct 14 04:05:55 localhost ovs-vsctl[52876]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.106
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Oct 14 04:05:55 localhost ovs-vsctl[52907]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005486731.localdomain
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005486731.novalocal' to 'np0005486731.localdomain'
Oct 14 04:05:55 localhost ovs-vsctl[52909]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Oct 14 04:05:55 localhost ovs-vsctl[52921]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Oct 14 04:05:55 localhost systemd[1]: libpod-26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3.scope: Deactivated successfully.
Oct 14 04:05:55 localhost systemd[1]: libpod-26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3.scope: Consumed 2.268s CPU time.
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Oct 14 04:05:55 localhost ovs-vsctl[52929]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Oct 14 04:05:55 localhost systemd[1]: libpod-10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd.scope: Deactivated successfully.
Oct 14 04:05:55 localhost systemd[1]: libpod-10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd.scope: Consumed 2.973s CPU time.
Oct 14 04:05:55 localhost podman[52191]: 2025-10-14 08:05:55.443970103 +0000 UTC m=+3.546664206 container died 10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, com.redhat.component=openstack-ceilometer-central-container, container_name=container-puppet-ceilometer, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-central, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.9, build-date=2025-07-21T14:49:23)
Oct 14 04:05:55 localhost ovs-vsctl[52937]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Oct 14 04:05:55 localhost ovs-vsctl[52955]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Oct 14 04:05:55 localhost podman[52938]: 2025-10-14 08:05:55.481980138 +0000 UTC m=+0.032609302 container died 26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:40, config_id=tripleo_puppet_step1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-rsyslog, name=rhosp17/openstack-rsyslog, architecture=x86_64, vcs-type=git, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, io.buildah.version=1.33.12)
Oct 14 04:05:55 localhost ovs-vsctl[52963]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Oct 14 04:05:55 localhost systemd[1]: tmp-crun.XNc4Vy.mount: Deactivated successfully.
Oct 14 04:05:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd-userdata-shm.mount: Deactivated successfully.
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Oct 14 04:05:55 localhost systemd[1]: var-lib-containers-storage-overlay-c0c763704100a115f96b041a65b3a8f6522965320f15224e7afd8516b03357b7-merged.mount: Deactivated successfully.
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Oct 14 04:05:55 localhost ovs-vsctl[52966]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:50:ef:d1
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Oct 14 04:05:55 localhost ovs-vsctl[52968]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Oct 14 04:05:55 localhost podman[52939]: 2025-10-14 08:05:55.56902621 +0000 UTC m=+0.117121756 container cleanup 10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20250721.1, build-date=2025-07-21T14:49:23,
container_name=container-puppet-ceilometer, com.redhat.component=openstack-ceilometer-central-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, name=rhosp17/openstack-ceilometer-central, architecture=x86_64, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-central) Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created Oct 14 04:05:55 localhost ovs-vsctl[52971]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: 
Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: 
/Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created Oct 14 04:05:55 localhost systemd[1]: libpod-conmon-10b9cdb6471becebf75478c50c0de8ecb43f83acfa619f91e9368c68e466a5bd.scope: Deactivated successfully. 
Oct 14 04:05:55 localhost python3[51435]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005486731 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Oct 14 04:05:55 localhost ovs-vsctl[52974]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:garp-max-timeout-sec=0 Oct 14 04:05:55 localhost puppet-user[52619]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created Oct 14 04:05:55 localhost podman[52938]: 2025-10-14 08:05:55.665278509 +0000 UTC m=+0.215907653 container cleanup 26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T12:58:40, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.33.12, name=rhosp17/openstack-rsyslog, release=1, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-rsyslog, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog) Oct 14 04:05:55 localhost systemd[1]: libpod-conmon-26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3.scope: Deactivated successfully. Oct 14 04:05:55 localhost python3[51435]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005486731 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 
'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 
Oct 14 04:05:55 localhost puppet-user[52619]: Notice: Applied catalog in 0.47 seconds Oct 14 04:05:55 localhost puppet-user[52619]: Application: Oct 14 04:05:55 localhost puppet-user[52619]: Initial environment: production Oct 14 04:05:55 localhost puppet-user[52619]: Converged environment: production Oct 14 04:05:55 localhost puppet-user[52619]: Run mode: user Oct 14 04:05:55 localhost puppet-user[52619]: Changes: Oct 14 04:05:55 localhost puppet-user[52619]: Total: 14 Oct 14 04:05:55 localhost puppet-user[52619]: Events: Oct 14 04:05:55 localhost puppet-user[52619]: Success: 14 Oct 14 04:05:55 localhost puppet-user[52619]: Total: 14 Oct 14 04:05:55 localhost puppet-user[52619]: Resources: Oct 14 04:05:55 localhost puppet-user[52619]: Skipped: 12 Oct 14 04:05:55 localhost puppet-user[52619]: Changed: 14 Oct 14 04:05:55 localhost puppet-user[52619]: Out of sync: 14 Oct 14 04:05:55 localhost puppet-user[52619]: Total: 29 Oct 14 04:05:55 localhost puppet-user[52619]: Time: Oct 14 04:05:55 localhost puppet-user[52619]: Exec: 0.01 Oct 14 04:05:55 localhost puppet-user[52619]: Config retrieval: 0.29 Oct 14 04:05:55 localhost puppet-user[52619]: Vs config: 0.41 Oct 14 04:05:55 localhost puppet-user[52619]: Transaction evaluation: 0.46 Oct 14 04:05:55 localhost puppet-user[52619]: Catalog application: 0.47 Oct 14 04:05:55 localhost puppet-user[52619]: Last run: 1760429155 Oct 14 04:05:55 localhost puppet-user[52619]: Total: 0.47 Oct 14 04:05:55 localhost puppet-user[52619]: Version: Oct 14 04:05:55 localhost puppet-user[52619]: Config: 1760429154 Oct 14 04:05:55 localhost puppet-user[52619]: Puppet: 7.10.0 Oct 14 04:05:56 localhost systemd[1]: libpod-3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8.scope: Deactivated successfully. Oct 14 04:05:56 localhost systemd[1]: libpod-3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8.scope: Consumed 2.796s CPU time. 
Oct 14 04:05:56 localhost podman[52543]: 2025-10-14 08:05:56.085515453 +0000 UTC m=+3.063180773 container died 3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, distribution-scope=public, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, vcs-type=git, name=rhosp17/openstack-ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, maintainer=OpenStack TripleO Team) Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully Oct 14 04:05:56 localhost podman[53055]: 2025-10-14 08:05:56.182003678 +0000 UTC m=+0.089811427 container cleanup 3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 
'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-ovn_controller, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1) Oct 14 04:05:56 localhost systemd[1]: libpod-conmon-3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8.scope: Deactivated successfully. 
Oct 14 04:05:56 localhost python3[51435]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005486731 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 14 04:05:56 localhost systemd[1]: var-lib-containers-storage-overlay-1e508b56e5c4215a90f6b7ab87161275acbfc49ce32885eceeaa718ef9d09113-merged.mount: Deactivated successfully. Oct 14 04:05:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8-userdata-shm.mount: Deactivated successfully. 
Oct 14 04:05:56 localhost systemd[1]: var-lib-containers-storage-overlay-d4bfc1d5359a39ca467891151850ad29ab2405c99c0e73704689224632337029-merged.mount: Deactivated successfully.
Oct 14 04:05:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3-userdata-shm.mount: Deactivated successfully.
Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully
Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 14 04:05:56 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3ccd56cc76ec60fa08fd698d282c9c89b1e8c485a00f47d57569ed8f6f8a16e4'
Oct 14 04:05:57 localhost puppet-user[51770]: Notice: Applied catalog in 4.26 seconds
Oct 14 04:05:57 localhost puppet-user[51770]: Application:
Oct 14 04:05:57 localhost puppet-user[51770]: Initial environment: production
Oct 14 04:05:57 localhost puppet-user[51770]: Converged environment: production
Oct 14 04:05:57 localhost puppet-user[51770]: Run mode: user
Oct 14 04:05:57 localhost puppet-user[51770]: Changes:
Oct 14 04:05:57 localhost puppet-user[51770]: Total: 183
Oct 14 04:05:57 localhost puppet-user[51770]: Events:
Oct 14 04:05:57 localhost puppet-user[51770]: Success: 183
Oct 14 04:05:57 localhost puppet-user[51770]: Total: 183
Oct 14 04:05:57 localhost puppet-user[51770]: Resources:
Oct 14 04:05:57 localhost puppet-user[51770]: Changed: 183
Oct 14 04:05:57 localhost puppet-user[51770]: Out of sync: 183
Oct 14 04:05:57 localhost puppet-user[51770]: Skipped: 57
Oct 14 04:05:57 localhost puppet-user[51770]: Total: 487
Oct 14 04:05:57 localhost puppet-user[51770]: Time:
Oct 14 04:05:57 localhost puppet-user[51770]: Concat fragment: 0.00
Oct 14 04:05:57 localhost puppet-user[51770]: Anchor: 0.00
Oct 14 04:05:57 localhost puppet-user[51770]: File line: 0.00
Oct 14 04:05:57 localhost puppet-user[51770]: Virtlogd config: 0.00
Oct 14 04:05:57 localhost puppet-user[51770]: Virtstoraged config: 0.01
Oct 14 04:05:57 localhost puppet-user[51770]: Virtsecretd config: 0.01
Oct 14 04:05:57 localhost puppet-user[51770]: Virtqemud config: 0.02
Oct 14 04:05:57 localhost puppet-user[51770]: Exec: 0.02
Oct 14 04:05:57 localhost puppet-user[51770]: Virtnodedevd config: 0.02
Oct 14 04:05:57 localhost puppet-user[51770]: File: 0.02
Oct 14 04:05:57 localhost puppet-user[51770]: Package: 0.03
Oct 14 04:05:57 localhost puppet-user[51770]: Virtproxyd config: 0.05
Oct 14 04:05:57 localhost puppet-user[51770]: Augeas: 0.94
Oct 14 04:05:57 localhost puppet-user[51770]: Config retrieval: 1.52
Oct 14 04:05:57 localhost puppet-user[51770]: Last run: 1760429157
Oct 14 04:05:57 localhost puppet-user[51770]: Nova config: 2.93
Oct 14 04:05:57 localhost puppet-user[51770]: Transaction evaluation: 4.24
Oct 14 04:05:57 localhost puppet-user[51770]: Catalog application: 4.26
Oct 14 04:05:57 localhost puppet-user[51770]: Resources: 0.00
Oct 14 04:05:57 localhost puppet-user[51770]: Concat file: 0.00
Oct 14 04:05:57 localhost puppet-user[51770]: Total: 4.26
Oct 14 04:05:57 localhost puppet-user[51770]: Version:
Oct 14 04:05:57 localhost puppet-user[51770]: Config: 1760429151
Oct 14 04:05:57 localhost puppet-user[51770]: Puppet: 7.10.0
Oct 14 04:05:58 localhost podman[52624]: 2025-10-14 08:05:53.264664807 +0000 UTC m=+0.038191310 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 14 04:05:58 localhost podman[53214]: 2025-10-14 08:05:58.514154022 +0000 UTC m=+0.089071118 container create f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, version=17.1.9, release=1, container_name=container-puppet-neutron, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-neutron-server, com.redhat.component=openstack-neutron-server-container, build-date=2025-07-21T15:44:03, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_puppet_step1)
Oct 14 04:05:58 localhost systemd[1]: Started libpod-conmon-f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b.scope.
Oct 14 04:05:58 localhost podman[53214]: 2025-10-14 08:05:58.462538535 +0000 UTC m=+0.037455711 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 14 04:05:58 localhost systemd[1]: tmp-crun.W1ty8k.mount: Deactivated successfully.
Oct 14 04:05:58 localhost systemd[1]: Started libcrun container.
Oct 14 04:05:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6637693f27f036631577218db5378dc8c17c8e585b32c036e38effbb8a457aa9/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 14 04:05:58 localhost podman[53214]: 2025-10-14 08:05:58.618530049 +0000 UTC m=+0.193447145 container init f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, com.redhat.component=openstack-neutron-server-container, container_name=container-puppet-neutron, build-date=2025-07-21T15:44:03, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., release=1)
Oct 14 04:05:58 localhost podman[53214]: 2025-10-14 08:05:58.629776309 +0000 UTC m=+0.204693415 container start f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, container_name=container-puppet-neutron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-server-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:03, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, architecture=x86_64, release=1, version=17.1.9)
Oct 14 04:05:58 localhost podman[53214]: 2025-10-14 08:05:58.630008475 +0000 UTC m=+0.204925571 container attach f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-neutron-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, version=17.1.9, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, container_name=container-puppet-neutron, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-neutron-server, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=)
Oct 14 04:05:58 localhost systemd[1]: libpod-19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c.scope: Deactivated successfully.
Oct 14 04:05:58 localhost systemd[1]: libpod-19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c.scope: Consumed 8.397s CPU time.
Oct 14 04:05:58 localhost podman[51621]: 2025-10-14 08:05:58.889371385 +0000 UTC m=+9.977841865 container died 19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, distribution-scope=public, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64)
Oct 14 04:05:59 localhost podman[53846]: 2025-10-14 08:05:59.163901851 +0000 UTC m=+0.243224301 container cleanup 19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, container_name=container-puppet-nova_libvirt, io.buildah.version=1.33.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9)
Oct 14 04:05:59 localhost systemd[1]: libpod-conmon-19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c.scope: Deactivated successfully.
Oct 14 04:05:59 localhost python3[51435]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005486731 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 14 04:05:59 localhost systemd[1]: var-lib-containers-storage-overlay-5c1375b47f7238425ac168df0b31eebcac7daf8f7b82fa846760d02ff141bc67-merged.mount: Deactivated successfully.
Oct 14 04:05:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c-userdata-shm.mount: Deactivated successfully.
Oct 14 04:06:00 localhost puppet-user[53864]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Oct 14 04:06:00 localhost puppet-user[53864]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 14 04:06:00 localhost puppet-user[53864]: (file: /etc/puppet/hiera.yaml)
Oct 14 04:06:00 localhost puppet-user[53864]: Warning: Undefined variable '::deploy_config_name';
Oct 14 04:06:00 localhost puppet-user[53864]: (file & line not available)
Oct 14 04:06:01 localhost puppet-user[53864]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 14 04:06:01 localhost puppet-user[53864]: (file & line not available)
Oct 14 04:06:01 localhost puppet-user[53864]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37)
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.60 seconds
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Oct 14 04:06:01 localhost puppet-user[53864]: Notice:
/Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created Oct 14 04:06:02 localhost puppet-user[53864]: Notice: Applied catalog in 0.42 seconds Oct 14 04:06:02 localhost puppet-user[53864]: Application: Oct 14 04:06:02 localhost puppet-user[53864]: Initial environment: production Oct 14 04:06:02 localhost puppet-user[53864]: Converged environment: production Oct 14 04:06:02 localhost puppet-user[53864]: Run mode: user Oct 14 04:06:02 localhost puppet-user[53864]: Changes: Oct 14 04:06:02 localhost puppet-user[53864]: Total: 33 Oct 14 04:06:02 localhost puppet-user[53864]: Events: Oct 14 04:06:02 localhost puppet-user[53864]: Success: 33 Oct 14 04:06:02 localhost puppet-user[53864]: Total: 33 Oct 14 04:06:02 localhost puppet-user[53864]: Resources: Oct 14 04:06:02 localhost puppet-user[53864]: Skipped: 21 Oct 14 04:06:02 localhost puppet-user[53864]: Changed: 33 Oct 14 04:06:02 localhost puppet-user[53864]: Out of sync: 33 Oct 14 04:06:02 localhost puppet-user[53864]: Total: 155 Oct 14 04:06:02 localhost puppet-user[53864]: Time: Oct 14 04:06:02 localhost puppet-user[53864]: Resources: 0.00 Oct 14 04:06:02 localhost puppet-user[53864]: Ovn metadata agent config: 0.02 Oct 14 04:06:02 localhost puppet-user[53864]: Neutron config: 0.34 Oct 14 04:06:02 localhost puppet-user[53864]: Transaction evaluation: 0.41 Oct 14 04:06:02 localhost puppet-user[53864]: Catalog application: 0.42 Oct 14 04:06:02 localhost puppet-user[53864]: Config retrieval: 0.68 Oct 14 04:06:02 localhost puppet-user[53864]: Last run: 1760429162 Oct 14 04:06:02 localhost puppet-user[53864]: Total: 0.42 Oct 14 04:06:02 localhost puppet-user[53864]: Version: Oct 14 04:06:02 localhost puppet-user[53864]: Config: 1760429160 Oct 14 04:06:02 localhost puppet-user[53864]: Puppet: 7.10.0 Oct 14 04:06:02 localhost systemd[1]: libpod-f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b.scope: Deactivated successfully. 
Oct 14 04:06:02 localhost systemd[1]: libpod-f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b.scope: Consumed 3.437s CPU time. Oct 14 04:06:02 localhost podman[53214]: 2025-10-14 08:06:02.553497735 +0000 UTC m=+4.128414911 container died f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, com.redhat.component=openstack-neutron-server-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T15:44:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, config_id=tripleo_puppet_step1, release=1, container_name=container-puppet-neutron, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-neutron-server, managed_by=tripleo_ansible) Oct 14 04:06:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b-userdata-shm.mount: Deactivated successfully. Oct 14 04:06:02 localhost systemd[1]: var-lib-containers-storage-overlay-6637693f27f036631577218db5378dc8c17c8e585b32c036e38effbb8a457aa9-merged.mount: Deactivated successfully. 
Oct 14 04:06:02 localhost podman[54002]: 2025-10-14 08:06:02.695282039 +0000 UTC m=+0.132292521 container cleanup f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, vcs-type=git, release=1, container_name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, tcib_managed=true, build-date=2025-07-21T15:44:03, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-server, com.redhat.component=openstack-neutron-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64) Oct 14 04:06:02 localhost systemd[1]: libpod-conmon-f28630ccdc89e75dd23892f25d91c7843de0490bb3bf73632ce7dc1962a3bc3b.scope: Deactivated successfully. Oct 14 04:06:02 localhost python3[51435]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005486731 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005486731', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume 
/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Oct 14 04:06:03 localhost python3[54054]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:04 localhost python3[54086]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:06:05 localhost python3[54136]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:06:05 localhost python3[54179]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429164.7326553-84747-189083876191085/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:05 localhost python3[54241]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:06:06 localhost python3[54284]: ansible-ansible.legacy.copy Invoked with 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429165.597351-84747-78446789307015/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:06 localhost python3[54346]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:06:07 localhost python3[54389]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429166.5515325-84873-97308472983136/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:07 localhost python3[54451]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:06:08 localhost python3[54494]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429167.4636095-84903-142324872842452/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False 
checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:08 localhost python3[54524]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:06:08 localhost systemd[1]: Reloading. Oct 14 04:06:08 localhost systemd-rc-local-generator[54547]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:06:08 localhost systemd-sysv-generator[54550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:06:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:06:08 localhost systemd[1]: Reloading. Oct 14 04:06:09 localhost systemd-sysv-generator[54594]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:06:09 localhost systemd-rc-local-generator[54590]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:06:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:06:09 localhost systemd[1]: Starting TripleO Container Shutdown... Oct 14 04:06:09 localhost systemd[1]: Finished TripleO Container Shutdown. 
Oct 14 04:06:09 localhost python3[54649]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:06:10 localhost python3[54692]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429169.4534602-84963-267247326807231/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:10 localhost python3[54754]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:06:11 localhost python3[54797]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429170.391012-84996-269510606298037/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:11 localhost python3[54827]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:06:11 localhost systemd[1]: Reloading. 
Oct 14 04:06:11 localhost systemd-rc-local-generator[54853]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:06:11 localhost systemd-sysv-generator[54858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:06:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:06:11 localhost systemd[1]: Reloading. Oct 14 04:06:12 localhost systemd-rc-local-generator[54892]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:06:12 localhost systemd-sysv-generator[54896]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:06:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:06:12 localhost systemd[1]: Starting Create netns directory... Oct 14 04:06:12 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 14 04:06:12 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 14 04:06:12 localhost systemd[1]: Finished Create netns directory. 
Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: b495038a864008964602910aa3c03fe1 Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: da9a0dc7b40588672419e3ce10063e21 Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: bd9d045a0b37801182392caf49375c15 Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 4c9706ce89053601d63131b238721a51 Oct 14 04:06:12 localhost python3[54920]: 
ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 6fab081f94b3dd479fa1fef3dbed1d07 Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 6fab081f94b3dd479fa1fef3dbed1d07 Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: b594b6ed5677fe328472ea80ffe520cb Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:12 localhost python3[54920]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: f5be0e0347f8a081fe8927c6f95950cc Oct 14 04:06:14 localhost python3[54978]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Oct 14 04:06:14 localhost podman[55015]: 2025-10-14 08:06:14.590756265 +0000 UTC m=+0.092871129 container create 6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, managed_by=tripleo_ansible, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_id=tripleo_step1, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9) Oct 14 04:06:14 localhost systemd[1]: Started libpod-conmon-6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7.scope. Oct 14 04:06:14 localhost systemd[1]: Started libcrun container. 
Oct 14 04:06:14 localhost podman[55015]: 2025-10-14 08:06:14.55005397 +0000 UTC m=+0.052168874 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 14 04:06:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5b531ba48535632b40540aa07cee707004fde63b53fdfb79d721331dbc1eb8/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Oct 14 04:06:14 localhost podman[55015]: 2025-10-14 08:06:14.661321928 +0000 UTC m=+0.163436782 container init 6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, architecture=x86_64, release=1, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd) Oct 14 04:06:14 localhost podman[55015]: 
2025-10-14 08:06:14.678107256 +0000 UTC m=+0.180222120 container start 6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=metrics_qdr_init_logs, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12) Oct 14 04:06:14 localhost podman[55015]: 2025-10-14 08:06:14.678613629 +0000 UTC m=+0.180728483 container attach 6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, config_id=tripleo_step1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr_init_logs, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team) Oct 14 04:06:14 localhost systemd[1]: libpod-6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7.scope: Deactivated successfully. 
Oct 14 04:06:14 localhost podman[55015]: 2025-10-14 08:06:14.687004723 +0000 UTC m=+0.189119627 container died 6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, build-date=2025-07-21T13:07:59, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr_init_logs, release=1, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Oct 14 04:06:14 localhost podman[55035]: 2025-10-14 08:06:14.778257919 +0000 UTC m=+0.076064471 container cleanup 6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T13:07:59, 
vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12) Oct 14 04:06:14 localhost systemd[1]: libpod-conmon-6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7.scope: Deactivated successfully. 
Oct 14 04:06:14 localhost python3[54978]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Oct 14 04:06:15 localhost podman[55108]: 2025-10-14 08:06:15.22910555 +0000 UTC m=+0.083230272 container create 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 14 04:06:15 localhost systemd[1]: Started libpod-conmon-4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.scope. Oct 14 04:06:15 localhost systemd[1]: Started libcrun container. 
Oct 14 04:06:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc5b6d664010750643235f3f70d195ea350655d57182e7e57ebfd557533d6a2/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Oct 14 04:06:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc5b6d664010750643235f3f70d195ea350655d57182e7e57ebfd557533d6a2/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Oct 14 04:06:15 localhost podman[55108]: 2025-10-14 08:06:15.188694701 +0000 UTC m=+0.042819433 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 14 04:06:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:06:15 localhost podman[55108]: 2025-10-14 08:06:15.317497128 +0000 UTC m=+0.171621900 container init 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 14 04:06:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:06:15 localhost podman[55108]: 2025-10-14 08:06:15.355836592 +0000 UTC m=+0.209961304 container start 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, 
name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, config_id=tripleo_step1) Oct 14 04:06:15 localhost python3[54978]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b495038a864008964602910aa3c03fe1 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume 
/etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 14 04:06:15 localhost podman[55131]: 2025-10-14 08:06:15.453159709 +0000 UTC m=+0.083994192 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, container_name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:06:15 localhost systemd[1]: tmp-crun.oEnpEt.mount: Deactivated successfully. Oct 14 04:06:15 localhost systemd[1]: var-lib-containers-storage-overlay-8c5b531ba48535632b40540aa07cee707004fde63b53fdfb79d721331dbc1eb8-merged.mount: Deactivated successfully. Oct 14 04:06:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6593c23de12d102a5f49b8c8163a4da787c1977b58f4d890fc73d42b3a21e7e7-userdata-shm.mount: Deactivated successfully. 
Oct 14 04:06:15 localhost podman[55131]: 2025-10-14 08:06:15.712432357 +0000 UTC m=+0.343266760 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, config_id=tripleo_step1, architecture=x86_64, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1) Oct 14 04:06:15 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:06:16 localhost python3[55207]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:16 localhost python3[55223]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:06:17 localhost python3[55284]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429176.3752503-85158-54779433492168/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:17 localhost python3[55300]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 04:06:17 localhost systemd[1]: Reloading. 
Oct 14 04:06:17 localhost systemd-rc-local-generator[55328]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:06:17 localhost systemd-sysv-generator[55331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:06:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:06:18 localhost python3[55352]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:06:19 localhost systemd[1]: Reloading. Oct 14 04:06:19 localhost systemd-rc-local-generator[55378]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:06:19 localhost systemd-sysv-generator[55381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:06:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:06:19 localhost systemd[1]: Starting metrics_qdr container... Oct 14 04:06:19 localhost systemd[1]: Started metrics_qdr container. 
Oct 14 04:06:20 localhost python3[55434]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:21 localhost python3[55555]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005486731 step=1 update_config_hash_only=False Oct 14 04:06:22 localhost python3[55571]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:06:22 localhost python3[55587]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 14 04:06:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:06:46 localhost podman[55664]: 2025-10-14 08:06:46.535468035 +0000 UTC m=+0.078442065 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, release=1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., version=17.1.9) Oct 14 04:06:46 localhost podman[55664]: 2025-10-14 08:06:46.715664194 +0000 UTC m=+0.258638194 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, config_id=tripleo_step1, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 14 04:06:46 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:07:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:07:17 localhost podman[55692]: 2025-10-14 08:07:17.536442323 +0000 UTC m=+0.078528067 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1, 
io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, managed_by=tripleo_ansible) Oct 14 04:07:17 localhost podman[55692]: 2025-10-14 08:07:17.736757138 +0000 UTC m=+0.278842812 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, container_name=metrics_qdr, config_id=tripleo_step1, architecture=x86_64, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., release=1) Oct 14 04:07:17 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:07:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:07:48 localhost podman[55797]: 2025-10-14 08:07:48.533541364 +0000 UTC m=+0.078453909 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 14 04:07:48 localhost podman[55797]: 2025-10-14 08:07:48.693084771 +0000 UTC m=+0.237997306 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, 
config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:07:48 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:08:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:08:19 localhost systemd[1]: tmp-crun.kaYYnY.mount: Deactivated successfully. Oct 14 04:08:19 localhost podman[55826]: 2025-10-14 08:08:19.530699211 +0000 UTC m=+0.075442858 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, 
managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9) Oct 14 04:08:19 localhost podman[55826]: 2025-10-14 08:08:19.769258792 +0000 UTC m=+0.314002459 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, container_name=metrics_qdr, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:08:19 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:08:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:08:50 localhost systemd[1]: tmp-crun.Uvgn55.mount: Deactivated successfully. 
Oct 14 04:08:50 localhost podman[55931]: 2025-10-14 08:08:50.545462141 +0000 UTC m=+0.086256119 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, 
batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, vcs-type=git, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:08:50 localhost podman[55931]: 2025-10-14 08:08:50.740065669 +0000 UTC m=+0.280859607 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible) Oct 14 04:08:50 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:09:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:09:21 localhost systemd[1]: tmp-crun.aUy9fZ.mount: Deactivated successfully. 
Oct 14 04:09:21 localhost podman[55960]: 2025-10-14 08:09:21.547816986 +0000 UTC m=+0.080823972 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vcs-type=git, 
com.redhat.component=openstack-qdrouterd-container, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:09:21 localhost podman[55960]: 2025-10-14 08:09:21.764153369 +0000 UTC m=+0.297160375 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.buildah.version=1.33.12, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:09:21 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:09:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:09:52 localhost podman[56066]: 2025-10-14 08:09:52.534272653 +0000 UTC m=+0.078559317 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step1, container_name=metrics_qdr, 
vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Oct 14 04:09:52 localhost podman[56066]: 2025-10-14 08:09:52.763242611 +0000 UTC m=+0.307529265 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-type=git, config_id=tripleo_step1, distribution-scope=public) Oct 14 04:09:52 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:10:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:10:23 localhost systemd[1]: tmp-crun.ksUP75.mount: Deactivated successfully. 
Oct 14 04:10:23 localhost podman[56094]: 2025-10-14 08:10:23.581657532 +0000 UTC m=+0.112386714 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Oct 14 04:10:23 localhost podman[56094]: 2025-10-14 08:10:23.766277402 +0000 UTC m=+0.297006534 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, com.redhat.component=openstack-qdrouterd-container) Oct 14 04:10:23 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:10:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:10:54 localhost podman[56202]: 2025-10-14 08:10:54.544558987 +0000 UTC m=+0.083162531 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12) Oct 14 04:10:54 localhost podman[56202]: 2025-10-14 08:10:54.733183773 +0000 UTC m=+0.271787317 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9) Oct 14 04:10:54 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:11:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:11:25 localhost podman[56233]: 2025-10-14 08:11:25.540347027 +0000 UTC m=+0.081385073 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, 
architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 14 04:11:25 localhost podman[56233]: 2025-10-14 08:11:25.745311261 +0000 UTC m=+0.286349357 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:11:25 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:11:34 localhost ceph-osd[31330]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [5,2,3] r=1 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:11:35 localhost ceph-osd[31330]: osd.2 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2,1,0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:11:36 localhost ceph-osd[31330]: osd.2 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2,1,0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:11:38 localhost ceph-osd[31330]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2,3,1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:11:38 localhost ceph-osd[31330]: osd.2 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [3,5,2] r=2 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 
04:11:39 localhost ceph-osd[31330]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2,3,1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:11:52 localhost ceph-osd[31330]: osd.2 pg_epoch: 28 pg[6.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [0,5,2] r=2 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:11:54 localhost ceph-osd[32282]: osd.4 pg_epoch: 29 pg[7.0( empty local-lis/les=0/0 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [5,4,3] r=1 lpr=29 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:11:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:11:56 localhost podman[56354]: 2025-10-14 08:11:56.020275968 +0000 UTC m=+0.080014621 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59) Oct 14 04:11:56 localhost podman[56354]: 2025-10-14 08:11:56.211074534 +0000 UTC m=+0.270813227 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=) Oct 14 04:11:56 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:12:15 localhost ceph-osd[31330]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=8.150734901s) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active pruub 1181.745849609s@ mbc={}] start_peering_interval up [5,2,3] -> [5,2,3], acting [5,2,3] -> [5,2,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:15 localhost ceph-osd[31330]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=8.147550583s) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1181.745849609s@ mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 
localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost 
ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost 
ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost 
ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:16 localhost ceph-osd[31330]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [5,2,3] r=1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.010330200s) [2,5,3] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.612182617s@ mbc={}] start_peering_interval up [5,2,3] -> [2,5,3], acting [5,2,3] -> [2,5,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.010257721s) [5,2,0] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.612182617s@ mbc={}] start_peering_interval up [5,2,3] -> [5,2,0], acting [5,2,3] -> [5,2,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.010149002s) [4,0,5] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.612060547s@ mbc={}] start_peering_interval up [5,2,3] -> [4,0,5], acting [5,2,3] -> [4,0,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.010330200s) [2,5,3] r=0 lpr=35 
pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1190.612182617s@ mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009452820s) [2,3,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611328125s@ mbc={}] start_peering_interval up [5,2,3] -> [2,3,1], acting [5,2,3] -> [2,3,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.010000229s) [4,0,5] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.612060547s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009452820s) [2,3,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1190.611328125s@ mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009260178s) [5,3,2] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611694336s@ mbc={}] start_peering_interval up [5,2,3] -> [5,3,2], acting [5,2,3] -> [5,3,2], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009812355s) [5,2,0] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.612182617s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 
les/c/f=34/34/0 sis=35 pruub=15.009221077s) [5,3,2] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.611694336s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009237289s) [2,0,5] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611694336s@ mbc={}] start_peering_interval up [5,2,3] -> [2,0,5], acting [5,2,3] -> [2,0,5], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009237289s) [2,0,5] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1190.611694336s@ mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008852959s) [2,1,0] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611328125s@ mbc={}] start_peering_interval up [5,2,3] -> [2,1,0], acting [5,2,3] -> [2,1,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008852959s) [2,1,0] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1190.611328125s@ mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008829117s) [4,1,3] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611572266s@ mbc={}] start_peering_interval up [5,2,3] -> [4,1,3], acting [5,2,3] -> [4,1,3], acting_primary 5 -> 4, up_primary 5 
-> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008778572s) [4,1,3] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.611572266s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008972168s) [3,4,5] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611572266s@ mbc={}] start_peering_interval up [5,2,3] -> [3,4,5], acting [5,2,3] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009176254s) [5,4,0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.612060547s@ mbc={}] start_peering_interval up [5,2,3] -> [5,4,0], acting [5,2,3] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009016037s) [2,0,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611816406s@ mbc={}] start_peering_interval up [5,2,3] -> [2,0,1], acting [5,2,3] -> [2,0,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008329391s) [2,3,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611328125s@ mbc={}] start_peering_interval up 
[5,2,3] -> [2,3,1], acting [5,2,3] -> [2,3,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008877754s) [4,5,0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611938477s@ mbc={}] start_peering_interval up [5,2,3] -> [4,5,0], acting [5,2,3] -> [4,5,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008568764s) [3,4,5] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.611572266s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009016037s) [2,0,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1190.611816406s@ mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008329391s) [2,3,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1190.611328125s@ mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008794785s) [4,5,0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.611938477s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008050919s) [3,1,4] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 
1190.611206055s@ mbc={}] start_peering_interval up [5,2,3] -> [3,1,4], acting [5,2,3] -> [3,1,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.008003235s) [3,1,4] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.611206055s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.009097099s) [5,4,0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.612060547s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.007251740s) [5,3,2] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.610595703s@ mbc={}] start_peering_interval up [5,2,3] -> [5,3,2], acting [5,2,3] -> [5,3,2], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.007224083s) [5,3,2] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.610595703s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=35 pruub=15.726684570s) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active pruub 1191.330200195s@ mbc={}] start_peering_interval up [2,1,0] -> [2,1,0], acting [2,1,0] -> [2,1,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 
04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.002687454s) [3,4,5] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.606323242s@ mbc={}] start_peering_interval up [5,2,3] -> [3,4,5], acting [5,2,3] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.002633095s) [3,4,5] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.606323242s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.006816864s) [1,2,3] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.610473633s@ mbc={}] start_peering_interval up [5,2,3] -> [1,2,3], acting [5,2,3] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.006765366s) [1,2,3] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.610473633s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=8.944773674s) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 1184.548706055s@ mbc={}] start_peering_interval up [3,5,2] -> [3,5,2], acting [3,5,2] -> [3,5,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.4( empty 
local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.006557465s) [3,2,1] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.610595703s@ mbc={}] start_peering_interval up [5,2,3] -> [3,2,1], acting [5,2,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.006514549s) [3,2,1] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.610595703s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.001403809s) [1,2,0] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.605468750s@ mbc={}] start_peering_interval up [5,2,3] -> [1,2,0], acting [5,2,3] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.001347542s) [1,2,0] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.605468750s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.005814552s) [5,0,2] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.610107422s@ mbc={}] start_peering_interval up [5,2,3] -> [5,0,2], acting [5,2,3] -> [5,0,2], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 
pruub=15.005788803s) [5,0,2] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.610107422s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.006892204s) [3,2,1] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611206055s@ mbc={}] start_peering_interval up [5,2,3] -> [3,2,1], acting [5,2,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.006824493s) [3,2,1] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.611206055s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.007015228s) [4,1,3] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.612060547s@ mbc={}] start_peering_interval up [5,2,3] -> [4,1,3], acting [5,2,3] -> [4,1,3], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.000403404s) [3,5,2] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.605468750s@ mbc={}] start_peering_interval up [5,2,3] -> [3,5,2], acting [5,2,3] -> [3,5,2], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.000343323s) [3,5,2] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 
unknown NOTIFY pruub 1190.605468750s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.006912231s) [4,1,3] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.612060547s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.005898476s) [5,4,3] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.611206055s@ mbc={}] start_peering_interval up [5,2,3] -> [5,4,3], acting [5,2,3] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.000577927s) [2,5,3] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.605957031s@ mbc={}] start_peering_interval up [5,2,3] -> [2,5,3], acting [5,2,3] -> [2,5,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.005856514s) [5,4,3] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.611206055s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.000577927s) [2,5,3] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1190.605957031s@ mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 
pruub=15.004306793s) [4,1,0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.609985352s@ mbc={}] start_peering_interval up [5,2,3] -> [4,1,0], acting [5,2,3] -> [4,1,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.000896454s) [4,5,0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.606689453s@ mbc={}] start_peering_interval up [5,2,3] -> [4,5,0], acting [5,2,3] -> [4,5,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.006368637s) [2,0,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.612182617s@ mbc={}] start_peering_interval up [5,2,3] -> [2,0,1], acting [5,2,3] -> [2,0,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.006368637s) [2,0,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1190.612182617s@ mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.000841141s) [3,4,5] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.607055664s@ mbc={}] start_peering_interval up [5,2,3] -> [3,4,5], acting [5,2,3] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 
pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.000794411s) [3,4,5] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.607055664s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.004261971s) [4,1,0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.609985352s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.000660896s) [0,1,4] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 1190.607177734s@ mbc={}] start_peering_interval up [5,2,3] -> [0,1,4], acting [5,2,3] -> [0,1,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=15.000036240s) [4,5,0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.606689453s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=14.999842644s) [0,1,4] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.607177734s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=35 pruub=15.726684570s) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.330200195s@ mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[31330]: osd.2 pg_epoch: 35 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 
pruub=8.938887596s) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1184.548706055s@ mbc={}] state: transitioning to Stray Oct 14 04:12:17 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,1,3] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,1,3] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,5,0] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,0,5] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,1,0] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:17 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,5,0] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [0,1,4] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1e( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 
lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1f( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1c( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1d( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1a( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.18( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.19( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.1e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1b( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 
unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.5( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.e( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [3,4,5] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.7( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.3( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.4( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.6( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.2( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.9( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 
04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.8( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.b( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.a( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.d( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [3,1,4] r=2 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.c( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 
pg[3.f( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.e( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.8( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.11( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.16( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.17( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.10( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.13( empty 
local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.12( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.14( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.15( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.15( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.13( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.17( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.12( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.10( empty local-lis/les=21/22 n=0 
ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.16( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.1( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [3,4,5] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.5( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.9( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.6( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.11( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 
les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.1( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.14( empty local-lis/les=20/21 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.7( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.2( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.3( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.1e( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 
sis=35) [3,4,5] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.1f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.1c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.4( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.1d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.1a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.18( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.1b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[4.19( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [3,5,2] 
r=2 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.14( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [5,4,0] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 35 pg[2.1a( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [5,4,3] r=1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,1,3] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.0( empty local-lis/les=35/36 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [2,3,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [2,1,0] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [2,0,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 36 pg[2.b( empty local-lis/les=35/36 n=0 
ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,5,0] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,0,5] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [2,5,3] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [2,0,5] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [2,5,3] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 36 pg[2.1d( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,5,0] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,1,3] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[32282]: osd.4 pg_epoch: 36 pg[2.1c( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [4,1,0] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated 
Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [2,3,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=35/36 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=35) [2,0,1] r=0 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.19( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1c( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1a( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 
les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.10( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.4( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete 
Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.b( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) 
[2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.14( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.2( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.13( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost 
ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.d( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:18 localhost ceph-osd[31330]: osd.2 pg_epoch: 36 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=20/20 les/c/f=21/21/0 sis=35) [2,1,0] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:19 localhost ceph-osd[31330]: osd.2 pg_epoch: 37 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=8.176515579s) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 1185.829711914s@ mbc={}] start_peering_interval up [2,3,1] -> [2,3,1], acting [2,3,1] -> [2,3,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:19 localhost ceph-osd[31330]: osd.2 pg_epoch: 37 pg[6.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=37 pruub=13.249021530s) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active pruub 1190.902343750s@ mbc={}] start_peering_interval up [0,5,2] -> [0,5,2], acting [0,5,2] -> [0,5,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:19 localhost ceph-osd[31330]: osd.2 pg_epoch: 37 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=8.176515579s) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1185.829711914s@ mbc={}] state: transitioning to Primary Oct 14 04:12:19 localhost ceph-osd[31330]: osd.2 
pg_epoch: 37 pg[6.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=37 pruub=13.244205475s) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1190.902343750s@ mbc={}] state: transitioning to Stray Oct 14 04:12:19 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.1b scrub starts Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.12( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.13( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.10( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.16( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.15( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.a( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.14( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 
pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.9( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.b( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.8( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.17( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.7( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.4( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.5( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.1c( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.1d( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.1e( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.1f( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.11( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.6( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.1( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.2( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.c( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 
mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.1b( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.d( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.f( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.1a( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.e( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.19( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.3( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 
unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[6.18( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [0,5,2] r=2 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning 
to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: 
osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=22/23 
n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=37/38 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 
les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 
14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] 
r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost 
ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:20 localhost ceph-osd[31330]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [2,3,1] r=0 lpr=37 
pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:21 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.18 scrub starts Oct 14 04:12:21 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.18 scrub ok Oct 14 04:12:21 localhost ceph-osd[32282]: osd.4 pg_epoch: 39 pg[7.0( v 31'39 (0'0,31'39] local-lis/les=29/30 n=22 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=39 pruub=12.500973701s) [5,4,3] r=1 lpr=39 pi=[29,39)/1 luod=0'0 lua=31'37 crt=31'39 lcod 31'38 mlcod 0'0 active pruub 1188.354370117s@ mbc={}] start_peering_interval up [5,4,3] -> [5,4,3], acting [5,4,3] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:21 localhost ceph-osd[32282]: osd.4 pg_epoch: 39 pg[7.0( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=1 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=39 pruub=12.498695374s) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 lcod 31'38 mlcod 0'0 unknown NOTIFY pruub 1188.354370117s@ mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.a( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=1 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=1 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=2 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.9( v 31'39 lc 0'0 
(0'0,31'39] local-lis/les=29/30 n=1 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.2( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=2 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.1( v 31'39 (0'0,31'39] local-lis/les=29/30 n=2 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.4( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=2 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=1 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=1 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.c( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=1 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.8( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=1 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 
pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.e( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=1 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.7( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=1 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=2 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:22 localhost ceph-osd[32282]: osd.4 pg_epoch: 40 pg[7.5( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=29/30 n=2 ec=39/29 lis/c=29/29 les/c/f=30/30/0 sis=39) [5,4,3] r=1 lpr=39 pi=[29,39)/1 crt=31'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 14 04:12:25 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.d scrub starts Oct 14 04:12:25 localhost python3[56428]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:12:25 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.d scrub ok Oct 14 04:12:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:12:26 localhost podman[56429]: 2025-10-14 08:12:26.54089063 +0000 UTC m=+0.084925875 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:12:26 localhost podman[56429]: 2025-10-14 08:12:26.70833097 +0000 UTC m=+0.252366145 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd) Oct 14 04:12:26 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:12:27 localhost python3[56475]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:12:27 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.b scrub starts Oct 14 04:12:27 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.b scrub ok Oct 14 04:12:27 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.5 scrub starts Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.807237625s) [0,5,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.639282227s@ mbc={}] start_peering_interval up [2,1,0] -> [0,5,4], acting [2,1,0] -> [0,5,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 
-> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.19( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.806879997s) [0,1,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.639038086s@ mbc={}] start_peering_interval up [2,1,0] -> [0,1,2], acting [2,1,0] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.19( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.806787491s) [0,1,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.639038086s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.876507759s) [2,1,0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.708862305s@ mbc={}] start_peering_interval up [0,5,2] -> [2,1,0], acting [0,5,2] -> [2,1,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.876507759s) [2,1,0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1194.708862305s@ mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.807166100s) [3,2,5] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.639892578s@ mbc={}] start_peering_interval up [2,1,0] -> [3,2,5], acting [2,1,0] -> [3,2,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 
04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.807131767s) [3,2,5] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.639892578s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.814663887s) [2,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647583008s@ mbc={}] start_peering_interval up [3,5,2] -> [2,3,1], acting [3,5,2] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.877632141s) [1,2,3] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710693359s@ mbc={}] start_peering_interval up [2,3,1] -> [1,2,3], acting [2,3,1] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.877593040s) [1,2,3] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710693359s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.16( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.877986908s) [0,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710815430s@ mbc={}] start_peering_interval up [0,5,2] -> [0,1,4], acting [0,5,2] -> [0,1,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.19( empty 
local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.814663887s) [2,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1200.647583008s@ mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875976562s) [5,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709228516s@ mbc={}] start_peering_interval up [0,5,2] -> [5,2,0], acting [0,5,2] -> [5,2,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.806601524s) [0,1,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640136719s@ mbc={}] start_peering_interval up [2,1,0] -> [0,1,4], acting [2,1,0] -> [0,1,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.806569099s) [0,1,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640136719s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875730515s) [5,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709228516s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.876915932s) [0,5,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710693359s@ mbc={}] start_peering_interval up [2,3,1] -> [0,5,2], acting 
[2,3,1] -> [0,5,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.19( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.874947548s) [5,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.708740234s@ mbc={}] start_peering_interval up [0,5,2] -> [5,3,4], acting [0,5,2] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.876815796s) [0,5,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710693359s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.19( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.874864578s) [5,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.708740234s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.814733505s) [5,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.648437500s@ mbc={}] start_peering_interval up [3,5,2] -> [5,3,2], acting [3,5,2] -> [5,3,2], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.813559532s) [1,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647705078s@ mbc={}] start_peering_interval up [3,5,2] -> [1,3,2], acting [3,5,2] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 2, 
features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1c( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.805199623s) [5,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.639282227s@ mbc={}] start_peering_interval up [2,1,0] -> [5,3,2], acting [2,1,0] -> [5,3,2], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.813514709s) [1,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647705078s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1c( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.805167198s) [5,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.639282227s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.16( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.877185822s) [0,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710815430s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.814462662s) [5,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.648437500s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.876263618s) [2,1,3] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710815430s@ mbc={}] start_peering_interval up [2,3,1] -> [2,1,3], acting [2,3,1] -> 
[2,1,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.876263618s) [2,1,3] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1194.710815430s@ mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.18( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.874571800s) [0,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709228516s@ mbc={}] start_peering_interval up [0,5,2] -> [0,1,4], acting [0,5,2] -> [0,1,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.806089401s) [0,5,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640502930s@ mbc={}] start_peering_interval up [2,1,0] -> [0,5,4], acting [2,1,0] -> [0,5,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.807124138s) [0,5,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.639282227s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.805697441s) [0,5,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640502930s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.10( empty local-lis/les=37/38 n=0 
ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875863075s) [0,2,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710815430s@ mbc={}] start_peering_interval up [0,5,2] -> [0,2,5], acting [0,5,2] -> [0,2,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.804161072s) [2,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.639160156s@ mbc={}] start_peering_interval up [2,1,0] -> [2,5,3], acting [2,1,0] -> [2,5,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.10( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875794411s) [0,2,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710815430s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.804161072s) [2,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1200.639160156s@ mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.18( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.874173164s) [0,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709228516s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.812797546s) [5,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647949219s@ mbc={}] start_peering_interval up [3,5,2] -> [5,3,4], acting [3,5,2] -> [5,3,4], 
acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.812713623s) [5,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647949219s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1a( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.804032326s) [4,3,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.639404297s@ mbc={}] start_peering_interval up [2,1,0] -> [4,3,1], acting [2,1,0] -> [4,3,1], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.812289238s) [2,1,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647705078s@ mbc={}] start_peering_interval up [3,5,2] -> [2,1,3], acting [3,5,2] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1a( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.803938866s) [4,3,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.639404297s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.812289238s) [2,1,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1200.647705078s@ mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/22 
lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.878720284s) [5,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714355469s@ mbc={}] start_peering_interval up [2,3,1] -> [5,2,0], acting [2,3,1] -> [5,2,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.878684998s) [5,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.714355469s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873060226s) [4,5,3] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.708862305s@ mbc={}] start_peering_interval up [0,5,2] -> [4,5,3], acting [0,5,2] -> [4,5,3], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.872925758s) [4,5,3] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.708862305s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.879990578s) [1,0,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715820312s@ mbc={}] start_peering_interval up [2,3,1] -> [1,0,4], acting [2,3,1] -> [1,0,4], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.879840851s) [1,0,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715820312s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.874288559s) [3,4,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709960938s@ mbc={}] start_peering_interval up [0,5,2] -> [3,4,1], acting [0,5,2] -> [3,4,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.803906441s) [4,5,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640014648s@ mbc={}] start_peering_interval up [2,1,0] -> [4,5,3], acting [2,1,0] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.811502457s) [2,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647583008s@ mbc={}] start_peering_interval up [3,5,2] -> [2,3,1], acting [3,5,2] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.811502457s) [2,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1200.647583008s@ mbc={}] state: transitioning to Primary
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.803833961s) [4,5,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640014648s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1d( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875039101s) [3,4,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.711303711s@ mbc={}] start_peering_interval up [0,5,2] -> [3,4,5], acting [0,5,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.874224663s) [0,5,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710693359s@ mbc={}] start_peering_interval up [2,3,1] -> [0,5,4], acting [2,3,1] -> [0,5,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.803984642s) [2,5,0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640502930s@ mbc={}] start_peering_interval up [2,1,0] -> [2,5,0], acting [2,1,0] -> [2,5,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.803984642s) [2,5,0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1200.640502930s@ mbc={}] state: transitioning to Primary
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.874174118s) [0,5,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710693359s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873753548s) [3,4,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709960938s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.874082565s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710693359s@ mbc={}] start_peering_interval up [2,3,1] -> [3,1,4], acting [2,3,1] -> [3,1,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.810735703s) [4,5,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647583008s@ mbc={}] start_peering_interval up [3,5,2] -> [4,5,3], acting [3,5,2] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.810678482s) [4,5,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647583008s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802208900s) [3,4,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.639160156s@ mbc={}] start_peering_interval up [2,1,0] -> [3,4,1], acting [2,1,0] -> [3,4,1], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1d( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875003815s) [3,4,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.711303711s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873545647s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710693359s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802029610s) [3,4,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.639160156s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.878145218s) [1,2,3] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715454102s@ mbc={}] start_peering_interval up [2,3,1] -> [1,2,3], acting [2,3,1] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873543739s) [4,3,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710937500s@ mbc={}] start_peering_interval up [0,5,2] -> [4,3,5], acting [0,5,2] -> [4,3,5], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.878115654s) [1,2,3] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715454102s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.4( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802778244s) [3,2,1] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640258789s@ mbc={}] start_peering_interval up [2,1,0] -> [3,2,1], acting [2,1,0] -> [3,2,1], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873484612s) [4,3,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710937500s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.871861458s) [2,1,3] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709350586s@ mbc={}] start_peering_interval up [0,5,2] -> [2,1,3], acting [0,5,2] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.4( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802721977s) [3,2,1] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640258789s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.809692383s) [2,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647338867s@ mbc={}] start_peering_interval up [3,5,2] -> [2,5,3], acting [3,5,2] -> [2,5,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.809692383s) [2,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1200.647338867s@ mbc={}] state: transitioning to Primary
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.877159119s) [5,0,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714965820s@ mbc={}] start_peering_interval up [2,3,1] -> [5,0,2], acting [2,3,1] -> [5,0,2], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.6( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.872859001s) [3,5,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710693359s@ mbc={}] start_peering_interval up [0,5,2] -> [3,5,4], acting [0,5,2] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.877104759s) [5,0,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.714965820s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.871861458s) [2,1,3] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1194.709350586s@ mbc={}] state: transitioning to Primary
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.6( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.872814178s) [3,5,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710693359s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.809581757s) [0,5,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.646728516s@ mbc={}] start_peering_interval up [3,5,2] -> [0,5,2], acting [3,5,2] -> [0,5,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.808568001s) [0,5,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.646728516s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875909805s) [0,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714233398s@ mbc={}] start_peering_interval up [2,3,1] -> [0,1,4], acting [2,3,1] -> [0,1,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875871658s) [0,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.714233398s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802170753s) [5,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640625000s@ mbc={}] start_peering_interval up [2,1,0] -> [5,3,4], acting [2,1,0] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802130699s) [5,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640625000s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802321434s) [5,0,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640502930s@ mbc={}] start_peering_interval up [2,1,0] -> [5,0,4], acting [2,1,0] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.870904922s) [1,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709472656s@ mbc={}] start_peering_interval up [0,5,2] -> [1,3,4], acting [0,5,2] -> [1,3,4], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.870882034s) [1,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709472656s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875643730s) [0,1,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714355469s@ mbc={}] start_peering_interval up [2,3,1] -> [0,1,2], acting [2,3,1] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.809516907s) [0,4,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647705078s@ mbc={}] start_peering_interval up [3,5,2] -> [0,4,1], acting [3,5,2] -> [0,4,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.809028625s) [0,4,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647705078s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875573158s) [0,1,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.714355469s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.808341980s) [4,5,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647216797s@ mbc={}] start_peering_interval up [3,5,2] -> [4,5,3], acting [3,5,2] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.801663399s) [5,0,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640502930s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875137329s) [1,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714233398s@ mbc={}] start_peering_interval up [2,3,1] -> [1,3,4], acting [2,3,1] -> [1,3,4], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.3( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.870110512s) [5,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709228516s@ mbc={}] start_peering_interval up [0,5,2] -> [5,2,0], acting [0,5,2] -> [5,2,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.3( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.870084763s) [5,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709228516s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.875082016s) [1,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.714233398s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.808281898s) [4,5,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647216797s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.801443100s) [3,2,1] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640380859s@ mbc={}] start_peering_interval up [2,1,0] -> [3,2,1], acting [2,1,0] -> [3,2,1], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.807930946s) [4,1,0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647338867s@ mbc={}] start_peering_interval up [3,5,2] -> [4,1,0], acting [3,5,2] -> [4,1,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.807873726s) [4,1,0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647338867s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.801110268s) [0,1,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640502930s@ mbc={}] start_peering_interval up [2,1,0] -> [0,1,4], acting [2,1,0] -> [0,1,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.800819397s) [0,1,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640502930s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.800996780s) [3,2,1] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640380859s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.4( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.870215416s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709960938s@ mbc={}] start_peering_interval up [0,5,2] -> [3,1,4], acting [0,5,2] -> [3,1,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.801108360s) [0,5,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.641113281s@ mbc={}] start_peering_interval up [2,1,0] -> [0,5,2], acting [2,1,0] -> [0,5,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.4( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.869937897s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709960938s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.801022530s) [0,5,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.641113281s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.5( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.869019508s) [5,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709350586s@ mbc={}] start_peering_interval up [0,5,2] -> [5,2,0], acting [0,5,2] -> [5,2,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873839378s) [3,5,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714233398s@ mbc={}] start_peering_interval up [2,3,1] -> [3,5,2], acting [2,3,1] -> [3,5,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873809814s) [3,5,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.714233398s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.804679871s) [2,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645019531s@ mbc={}] start_peering_interval up [3,5,2] -> [2,3,1], acting [3,5,2] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.804679871s) [2,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1200.645019531s@ mbc={}] state: transitioning to Primary
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.5( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.868813515s) [5,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709350586s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.7( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.868483543s) [5,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709350586s@ mbc={}] start_peering_interval up [0,5,2] -> [5,3,4], acting [0,5,2] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873862267s) [5,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714721680s@ mbc={}] start_peering_interval up [2,3,1] -> [5,3,4], acting [2,3,1] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.7( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.868434906s) [5,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709350586s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.806626320s) [0,2,5] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647583008s@ mbc={}] start_peering_interval up [3,5,2] -> [0,2,5], acting [3,5,2] -> [0,2,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.2( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.799945831s) [3,2,5] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640991211s@ mbc={}] start_peering_interval up [2,1,0] -> [3,2,5], acting [2,1,0] -> [3,2,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873803139s) [5,3,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.714721680s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.806520462s) [0,2,5] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647583008s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.804054260s) [1,2,0] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645263672s@ mbc={}] start_peering_interval up [3,5,2] -> [1,2,0], acting [3,5,2] -> [1,2,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.804019928s) [1,2,0] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.645263672s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.2( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.799915314s) [3,2,5] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640991211s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867938995s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709350586s@ mbc={}] start_peering_interval up [0,5,2] -> [3,1,4], acting [0,5,2] -> [3,1,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867856979s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709350586s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798669815s) [4,1,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640380859s@ mbc={}] start_peering_interval up [2,1,0] -> [4,1,3], acting [2,1,0] -> [4,1,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.805936813s) [1,4,0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647705078s@ mbc={}] start_peering_interval up [3,5,2] -> [1,4,0], acting [3,5,2] -> [1,4,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873619080s) [4,3,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715209961s@ mbc={}] start_peering_interval up [2,3,1] -> [4,3,5], acting [2,3,1] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798604012s) [4,1,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640380859s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867847443s) [1,3,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709594727s@ mbc={}] start_peering_interval up [0,5,2] -> [1,3,2], acting [0,5,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.873492241s) [4,3,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715209961s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867804527s) [1,3,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709594727s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.805744171s) [1,4,0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647705078s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,1,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798231125s) [4,0,5] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640502930s@ mbc={}] start_peering_interval up [2,1,0] -> [4,0,5], acting [2,1,0] -> [4,0,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798181534s) [4,0,5] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640502930s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802708626s) [3,2,5] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.644897461s@ mbc={}] start_peering_interval up [3,5,2] -> [3,2,5], acting [3,5,2] -> [3,2,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802492142s) [3,2,5] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.644897461s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.868889809s) [5,2,3] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.711059570s@ mbc={}] start_peering_interval up [2,3,1] -> [5,2,3], acting [2,3,1] -> [5,2,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867536545s) [5,3,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710327148s@ mbc={}] start_peering_interval up [0,5,2] -> [5,3,2], acting [0,5,2] -> [5,3,2], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.b( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.797667503s) [3,1,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640380859s@ mbc={}] start_peering_interval up [2,1,0] -> [3,1,4], acting [2,1,0] -> [3,1,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.804486275s) [5,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647338867s@ mbc={}] start_peering_interval up [3,5,2] -> [5,3,4], acting [3,5,2] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867451668s) [5,3,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710327148s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.804447174s) [5,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647338867s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.b( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.797504425s) [3,1,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640380859s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867570877s) [3,4,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710571289s@ mbc={}] start_peering_interval up [0,5,2] -> [3,4,5], acting [0,5,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.871370316s) [4,0,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714477539s@ mbc={}] start_peering_interval up [2,3,1] -> [4,0,1], acting [2,3,1] -> [4,0,1], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798202515s) [5,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.641357422s@ mbc={}] start_peering_interval up [2,1,0] -> [5,3,2], acting [2,1,0] -> [5,3,2], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.871310234s) [4,0,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.714477539s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.868002892s) [5,2,3] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.711059570s@ mbc={}] state: transitioning to Stray
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.871250153s) [2,5,0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714599609s@ mbc={}] start_peering_interval up [2,3,1] -> [2,5,0], acting [2,3,1] -> [2,5,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.803715706s) [1,4,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647094727s@ mbc={}] start_peering_interval up [3,5,2] -> [1,4,3], acting [3,5,2] -> [1,4,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.a( empty
local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798158646s) [5,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.641357422s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.803691864s) [1,4,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647094727s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.871250153s) [2,5,0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1194.714599609s@ mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867539406s) [3,4,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710571289s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,1,0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.866491318s) [5,2,3] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710083008s@ mbc={}] start_peering_interval up [0,5,2] -> [5,2,3], acting [0,5,2] -> [5,2,3], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.871836662s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 
1194.715454102s@ mbc={}] start_peering_interval up [2,3,1] -> [3,1,4], acting [2,3,1] -> [3,1,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.866341591s) [5,2,3] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710083008s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.d( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.797378540s) [5,2,3] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.641235352s@ mbc={}] start_peering_interval up [2,1,0] -> [5,2,3], acting [2,1,0] -> [5,2,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.871577263s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715454102s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.d( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.797304153s) [5,2,3] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.641235352s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.870429993s) [2,0,5] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714599609s@ mbc={}] start_peering_interval up [2,3,1] -> [2,0,5], acting [2,3,1] -> [2,0,5], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 
localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.870429993s) [2,0,5] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1194.714599609s@ mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.797024727s) [5,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.641235352s@ mbc={}] start_peering_interval up [2,1,0] -> [5,3,4], acting [2,1,0] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.801128387s) [1,0,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645385742s@ mbc={}] start_peering_interval up [3,5,2] -> [1,0,2], acting [3,5,2] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802946091s) [0,1,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.647338867s@ mbc={}] start_peering_interval up [3,5,2] -> [0,1,4], acting [3,5,2] -> [0,1,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 
pruub=14.801004410s) [1,0,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.645385742s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.802909851s) [0,1,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.647338867s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.796903610s) [5,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.641235352s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.9( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.865562439s) [0,2,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710083008s@ mbc={}] start_peering_interval up [0,5,2] -> [0,2,5], acting [0,5,2] -> [0,2,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.796661377s) [5,4,0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.641235352s@ mbc={}] start_peering_interval up [2,1,0] -> [5,4,0], acting [2,1,0] -> [5,4,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 
les/c/f=38/38/0 sis=41 pruub=8.865278244s) [5,0,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.709960938s@ mbc={}] start_peering_interval up [0,5,2] -> [5,0,4], acting [0,5,2] -> [5,0,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.796565056s) [5,4,0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.641235352s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.869921684s) [0,4,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.714721680s@ mbc={}] start_peering_interval up [2,3,1] -> [0,4,1], acting [2,3,1] -> [0,4,1], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.865229607s) [5,0,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.709960938s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.869869232s) [0,4,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.714721680s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.800126076s) [4,5,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645019531s@ mbc={}] start_peering_interval up [3,5,2] -> [4,5,3], acting [3,5,2] -> [4,5,3], acting_primary 3 -> 4, 
up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.870033264s) [1,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715087891s@ mbc={}] start_peering_interval up [2,3,1] -> [1,2,0], acting [2,3,1] -> [1,2,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.800094604s) [4,5,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.645019531s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.1e( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,5,3] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.800186157s) [4,0,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645507812s@ mbc={}] start_peering_interval up [3,5,2] -> [4,0,1], acting [3,5,2] -> [4,0,1], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.864713669s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710083008s@ mbc={}] start_peering_interval up [0,5,2] -> [3,1,4], acting [0,5,2] -> [3,1,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost 
ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.9( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.865331650s) [0,2,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710083008s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.800123215s) [4,0,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.645507812s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.864658356s) [3,1,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710083008s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,0,1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.869624138s) [2,0,1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715454102s@ mbc={}] start_peering_interval up [2,3,1] -> [2,0,1], acting [2,3,1] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.869624138s) [2,0,1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1194.715454102s@ mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.869861603s) [1,2,0] r=1 
lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715087891s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.14( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.864163399s) [3,5,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710083008s@ mbc={}] start_peering_interval up [0,5,2] -> [3,5,4], acting [0,5,2] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.794033051s) [4,5,0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640136719s@ mbc={}] start_peering_interval up [2,1,0] -> [4,5,0], acting [2,1,0] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.14( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.864114761s) [3,5,4] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710083008s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.794004440s) [4,5,0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640136719s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.868805885s) [3,2,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715087891s@ mbc={}] start_peering_interval up [2,3,1] -> [3,2,5], acting [2,3,1] -> [3,2,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 
4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.799334526s) [0,1,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645751953s@ mbc={}] start_peering_interval up [3,5,2] -> [0,1,2], acting [3,5,2] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.868773460s) [3,2,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715087891s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,0,5] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.799242020s) [0,1,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.645751953s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.15( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.864015579s) [5,4,0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710571289s@ mbc={}] start_peering_interval up [0,5,2] -> [5,4,0], acting [0,5,2] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.15( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.863993645s) [5,4,0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY 
pruub 1194.710571289s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798442841s) [3,1,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645507812s@ mbc={}] start_peering_interval up [3,5,2] -> [3,1,2], acting [3,5,2] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798416138s) [3,1,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.645507812s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.868550301s) [5,3,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715698242s@ mbc={}] start_peering_interval up [2,3,1] -> [5,3,2], acting [2,3,1] -> [5,3,2], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.868528366s) [5,3,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715698242s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,0,1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.13( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.793752670s) [1,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 
active pruub 1200.640991211s@ mbc={}] start_peering_interval up [2,1,0] -> [1,3,2], acting [2,1,0] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.13( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.793722153s) [1,3,2] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640991211s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867890358s) [5,3,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715209961s@ mbc={}] start_peering_interval up [2,3,1] -> [5,3,2], acting [2,3,1] -> [5,3,2], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798082352s) [4,0,5] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645507812s@ mbc={}] start_peering_interval up [3,5,2] -> [4,0,5], acting [3,5,2] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.10( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.792609215s) [1,4,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640136719s@ mbc={}] start_peering_interval up [2,1,0] -> [1,4,3], acting [2,1,0] -> [1,4,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.17( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 
les/c/f=38/38/0 sis=41 pruub=8.863943100s) [4,0,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.711425781s@ mbc={}] start_peering_interval up [0,5,2] -> [4,0,1], acting [0,5,2] -> [4,0,1], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.17( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.863832474s) [4,0,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.711425781s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798026085s) [4,0,5] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.645507812s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.1c( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,3,5] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.797921181s) [4,3,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645751953s@ mbc={}] start_peering_interval up [3,5,2] -> [4,3,1], acting [3,5,2] -> [4,3,1], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.797888756s) [4,3,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.645751953s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 
ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867805481s) [5,3,2] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715209961s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867226601s) [3,2,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715209961s@ mbc={}] start_peering_interval up [2,3,1] -> [3,2,5], acting [2,3,1] -> [3,2,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867246628s) [4,0,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715209961s@ mbc={}] start_peering_interval up [2,3,1] -> [4,0,1], acting [2,3,1] -> [4,0,1], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867180824s) [3,2,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715209961s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867215157s) [4,0,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715209961s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.10( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.792402267s) [1,4,3] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640136719s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost 
ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.14( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.792206764s) [1,2,0] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640502930s@ mbc={}] start_peering_interval up [2,1,0] -> [1,2,0], acting [2,1,0] -> [1,2,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798411369s) [5,2,3] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.646606445s@ mbc={}] start_peering_interval up [3,5,2] -> [5,2,3], acting [3,5,2] -> [5,2,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.14( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.792175293s) [1,2,0] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640502930s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798350334s) [5,2,3] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.646606445s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867248535s) [4,5,3] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715698242s@ mbc={}] start_peering_interval up [2,3,1] -> [4,5,3], acting [2,3,1] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 
n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.867224693s) [4,5,3] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715698242s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.11( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.862972260s) [3,4,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.711303711s@ mbc={}] start_peering_interval up [0,5,2] -> [3,4,1], acting [0,5,2] -> [3,4,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798250198s) [3,4,5] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.646728516s@ mbc={}] start_peering_interval up [3,5,2] -> [3,4,5], acting [3,5,2] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.12( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.862236023s) [4,1,0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.710815430s@ mbc={}] start_peering_interval up [0,5,2] -> [4,1,0], acting [0,5,2] -> [4,1,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.798229218s) [3,4,5] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.646728516s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,5,3] r=0 lpr=41 
pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.12( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.862196922s) [4,1,0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.710815430s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.797763824s) [0,4,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.645874023s@ mbc={}] start_peering_interval up [3,5,2] -> [0,4,1], acting [3,5,2] -> [0,4,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.11( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.862734795s) [3,4,1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.711303711s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.796929359s) [0,4,1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.645874023s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.791100502s) [1,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active pruub 1200.640136719s@ mbc={}] start_peering_interval up [2,1,0] -> [1,3,4], acting [2,1,0] -> [1,3,4], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 
pruub=8.866587639s) [1,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715698242s@ mbc={}] start_peering_interval up [2,3,1] -> [1,2,0], acting [2,3,1] -> [1,2,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=14.791046143s) [1,3,4] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1200.640136719s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.866559029s) [1,2,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715698242s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.866321564s) [5,4,0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.715698242s@ mbc={}] start_peering_interval up [2,3,1] -> [5,4,0], acting [2,3,1] -> [5,4,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.866293907s) [5,4,0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.715698242s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,3,5] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.13( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 
les/c/f=38/38/0 sis=41 pruub=8.861879349s) [3,4,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active pruub 1194.711303711s@ mbc={}] start_peering_interval up [0,5,2] -> [3,4,5], acting [0,5,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:27 localhost ceph-osd[31330]: osd.2 pg_epoch: 41 pg[6.13( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=8.861722946s) [3,4,5] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1194.711303711s@ mbc={}] state: transitioning to Stray Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.1a( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.12( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,1,0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.17( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,0,1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.1f( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.15( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,3,1] r=0 lpr=41 
pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,0,5] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,0,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:12:27 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.5 scrub ok Oct 14 04:12:28 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts Oct 14 04:12:28 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.1d( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,4,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [3,4,1] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.1e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [0,5,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.10( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [1,4,3] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 
unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.15( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [5,4,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [1,3,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.13( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,4,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.10( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [5,4,0] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.14( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,5,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [0,5,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.16( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [0,1,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [0,5,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost 
ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.1f( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,4,1] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,1,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [5,3,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[6.1b( empty local-lis/les=41/42 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [2,1,0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.1b( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [1,0,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.18( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [0,1,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [0,1,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,5,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.4( empty 
local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [0,4,1] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.11( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,4,1] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,1,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [1,4,3] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,4,5] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.a( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [0,4,1] r=1 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.b( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [0,1,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [5,3,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [5,4,0] r=1 lpr=41 
pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [5,0,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [0,1,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [0,4,1] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.1d( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,1,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [0,1,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [1,3,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [1,3,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [2,0,1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated 
Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,1,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [3,1,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [2,5,0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [2,0,5] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [1,4,0] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.b( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [3,1,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [2,5,0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [5,0,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning 
to Stray Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[6.1( empty local-lis/les=41/42 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [2,1,3] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[4.19( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [2,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [3,4,5] r=1 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.7( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [5,3,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [5,3,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [2,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[4.1d( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [2,1,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[4.6( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [2,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] 
state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[6.19( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [5,3,4] r=2 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[4.3( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [2,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [2,1,3] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[31330]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [2,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [5,3,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 41 pg[4.c( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [5,3,4] r=2 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[3.1a( empty local-lis/les=41/42 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,3,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[4.15( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,3,1] r=0 lpr=41 
pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[3.9( empty local-lis/les=41/42 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,1,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[6.17( empty local-lis/les=41/42 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,0,1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[6.12( empty local-lis/les=41/42 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,1,0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[4.1f( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,0,5] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,3,5] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: 
osd.4 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,5,3] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,0,5] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,0,1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,1,0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,0,1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,0,1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[6.1e( empty local-lis/les=41/42 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,5,3] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 
0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[6.1c( empty local-lis/les=41/42 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=41) [4,3,5] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost ceph-osd[32282]: osd.4 pg_epoch: 42 pg[3.1b( empty local-lis/les=41/42 n=0 ec=35/20 lis/c=35/35 les/c/f=36/36/0 sis=41) [4,5,3] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:12:28 localhost python3[56491]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:12:29 localhost ceph-osd[32282]: osd.4 pg_epoch: 43 pg[7.a( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.868582726s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1192.148193359s@ mbc={}] start_peering_interval up [5,4,3] -> [3,2,1], acting [5,4,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:29 localhost ceph-osd[32282]: osd.4 pg_epoch: 43 pg[7.2( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.867513657s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 
luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1192.147338867s@ mbc={}] start_peering_interval up [5,4,3] -> [3,2,1], acting [5,4,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:29 localhost ceph-osd[32282]: osd.4 pg_epoch: 43 pg[7.2( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.867416382s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.147338867s@ mbc={}] state: transitioning to Stray Oct 14 04:12:29 localhost ceph-osd[32282]: osd.4 pg_epoch: 43 pg[7.a( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.868256569s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.148193359s@ mbc={}] state: transitioning to Stray Oct 14 04:12:29 localhost ceph-osd[32282]: osd.4 pg_epoch: 43 pg[7.e( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.867193222s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1192.147949219s@ mbc={}] start_peering_interval up [5,4,3] -> [3,2,1], acting [5,4,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:29 localhost ceph-osd[32282]: osd.4 pg_epoch: 43 pg[7.e( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.867094994s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.147949219s@ mbc={}] state: transitioning to Stray Oct 14 04:12:29 localhost ceph-osd[32282]: osd.4 pg_epoch: 43 pg[7.6( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.866628647s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1192.147705078s@ mbc={}] 
start_peering_interval up [5,4,3] -> [3,2,1], acting [5,4,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:29 localhost ceph-osd[32282]: osd.4 pg_epoch: 43 pg[7.6( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.866548538s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.147705078s@ mbc={}] state: transitioning to Stray Oct 14 04:12:30 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.1d scrub starts Oct 14 04:12:30 localhost ceph-osd[31330]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,2,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:30 localhost ceph-osd[31330]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,2,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:30 localhost ceph-osd[31330]: osd.2 pg_epoch: 43 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,2,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:30 localhost ceph-osd[31330]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,2,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:31 localhost python3[56539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:12:31 localhost ceph-osd[32282]: osd.4 pg_epoch: 45 pg[7.b( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=14.811924934s) [3,1,4] 
r=2 lpr=45 pi=[39,45)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1200.148071289s@ mbc={}] start_peering_interval up [5,4,3] -> [3,1,4], acting [5,4,3] -> [3,1,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:31 localhost ceph-osd[32282]: osd.4 pg_epoch: 45 pg[7.3( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=14.810985565s) [3,1,4] r=2 lpr=45 pi=[39,45)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1200.147094727s@ mbc={}] start_peering_interval up [5,4,3] -> [3,1,4], acting [5,4,3] -> [3,1,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:31 localhost ceph-osd[32282]: osd.4 pg_epoch: 45 pg[7.f( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=14.812011719s) [3,1,4] r=2 lpr=45 pi=[39,45)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1200.148193359s@ mbc={}] start_peering_interval up [5,4,3] -> [3,1,4], acting [5,4,3] -> [3,1,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:31 localhost ceph-osd[32282]: osd.4 pg_epoch: 45 pg[7.b( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=14.811822891s) [3,1,4] r=2 lpr=45 pi=[39,45)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.148071289s@ mbc={}] state: transitioning to Stray Oct 14 04:12:31 localhost ceph-osd[32282]: osd.4 pg_epoch: 45 pg[7.3( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=14.810877800s) [3,1,4] r=2 lpr=45 pi=[39,45)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.147094727s@ mbc={}] state: transitioning to Stray Oct 14 04:12:31 localhost ceph-osd[32282]: osd.4 pg_epoch: 45 pg[7.f( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 
ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=14.811944008s) [3,1,4] r=2 lpr=45 pi=[39,45)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.148193359s@ mbc={}] state: transitioning to Stray Oct 14 04:12:31 localhost ceph-osd[32282]: osd.4 pg_epoch: 45 pg[7.7( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=14.810754776s) [3,1,4] r=2 lpr=45 pi=[39,45)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1200.147216797s@ mbc={}] start_peering_interval up [5,4,3] -> [3,1,4], acting [5,4,3] -> [3,1,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:31 localhost ceph-osd[32282]: osd.4 pg_epoch: 45 pg[7.7( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=14.810682297s) [3,1,4] r=2 lpr=45 pi=[39,45)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.147216797s@ mbc={}] state: transitioning to Stray Oct 14 04:12:31 localhost python3[56582]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429550.8227544-92638-159712294746111/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=0991400062f1e3522feec6859340320816889889 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:12:34 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.1c scrub starts Oct 14 04:12:36 localhost python3[56644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:12:36 localhost python3[56687]: 
ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429556.1017506-92638-34380851289299/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=ba6c47c4b62a1635e77f10e9e003b0ff16f31619 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:12:37 localhost ceph-osd[32282]: osd.4 pg_epoch: 47 pg[7.4( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.230521202s) [0,1,2] r=-1 lpr=47 pi=[39,47)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1200.147705078s@ mbc={}] start_peering_interval up [5,4,3] -> [0,1,2], acting [5,4,3] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:37 localhost ceph-osd[32282]: osd.4 pg_epoch: 47 pg[7.c( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.230117798s) [0,1,2] r=-1 lpr=47 pi=[39,47)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1200.147216797s@ mbc={}] start_peering_interval up [5,4,3] -> [0,1,2], acting [5,4,3] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:37 localhost ceph-osd[32282]: osd.4 pg_epoch: 47 pg[7.c( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.229992867s) [0,1,2] r=-1 lpr=47 pi=[39,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.147216797s@ mbc={}] state: transitioning to Stray Oct 14 04:12:37 localhost ceph-osd[32282]: osd.4 pg_epoch: 47 pg[7.4( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=47 
pruub=9.230356216s) [0,1,2] r=-1 lpr=47 pi=[39,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.147705078s@ mbc={}] state: transitioning to Stray Oct 14 04:12:38 localhost ceph-osd[31330]: osd.2 pg_epoch: 47 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=47) [0,1,2] r=2 lpr=47 pi=[39,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:38 localhost ceph-osd[31330]: osd.2 pg_epoch: 47 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=47) [0,1,2] r=2 lpr=47 pi=[39,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:38 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 6.12 deep-scrub starts Oct 14 04:12:40 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.a scrub starts Oct 14 04:12:40 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.a scrub ok Oct 14 04:12:41 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.2 scrub starts Oct 14 04:12:41 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.2 scrub ok Oct 14 04:12:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:12:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4911 writes, 22K keys, 4911 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4911 writes, 411 syncs, 11.95 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1511 writes, 5943 keys, 1511 commit groups, 1.0 writes per commit group, ingest: 2.07 MB, 0.00 MB/s#012Interval WAL: 1511 writes, 207 syncs, 7.30 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) 
Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55644d8b0850#2 capacity: 1.62 
GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 
level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55644d8b0850#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 
0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m Oct 14 04:12:41 localhost python3[56749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:12:42 localhost python3[56792]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429561.438608-92638-248911330138202/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=2a2148c4af133c419b7d1e891437641895bee05f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:12:43 localhost ceph-osd[32282]: osd.4 pg_epoch: 49 pg[7.d( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=49 pruub=11.087769508s) [4,0,1] r=0 lpr=49 pi=[39,49)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1208.146728516s@ mbc={}] start_peering_interval up [5,4,3] -> [4,0,1], acting [5,4,3] -> [4,0,1], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:43 localhost ceph-osd[32282]: osd.4 pg_epoch: 49 pg[7.5( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=49 pruub=11.087645531s) [4,0,1] r=0 lpr=49 pi=[39,49)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 
1208.146728516s@ mbc={}] start_peering_interval up [5,4,3] -> [4,0,1], acting [5,4,3] -> [4,0,1], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:43 localhost ceph-osd[32282]: osd.4 pg_epoch: 49 pg[7.d( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=49 pruub=11.087769508s) [4,0,1] r=0 lpr=49 pi=[39,49)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown pruub 1208.146728516s@ mbc={}] state: transitioning to Primary Oct 14 04:12:43 localhost ceph-osd[32282]: osd.4 pg_epoch: 49 pg[7.5( v 31'39 (0'0,31'39] local-lis/les=39/40 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=49 pruub=11.087645531s) [4,0,1] r=0 lpr=49 pi=[39,49)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown pruub 1208.146728516s@ mbc={}] state: transitioning to Primary Oct 14 04:12:43 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 6.17 scrub starts Oct 14 04:12:43 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 6.17 scrub ok Oct 14 04:12:44 localhost ceph-osd[32282]: osd.4 pg_epoch: 50 pg[7.5( v 31'39 (0'0,31'39] local-lis/les=49/50 n=2 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=49) [4,0,1] r=0 lpr=49 pi=[39,49)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete Oct 14 04:12:44 localhost ceph-osd[32282]: osd.4 pg_epoch: 50 pg[7.d( v 31'39 (0'0,31'39] local-lis/les=49/50 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=49) [4,0,1] r=0 lpr=49 pi=[39,49)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete Oct 14 04:12:45 localhost ceph-osd[31330]: osd.2 pg_epoch: 51 pg[7.e( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=39/29 lis/c=43/43 les/c/f=44/46/0 sis=51 pruub=9.549541473s) [0,1,4] r=-1 lpr=51 pi=[43,51)/1 luod=0'0 crt=31'39 mlcod 0'0 active pruub 1213.264526367s@ mbc={}] start_peering_interval up [3,2,1] -> [0,1,4], acting [3,2,1] -> [0,1,4], 
acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:45 localhost ceph-osd[31330]: osd.2 pg_epoch: 51 pg[7.6( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=39/29 lis/c=43/43 les/c/f=44/46/0 sis=51 pruub=9.549113274s) [0,1,4] r=-1 lpr=51 pi=[43,51)/1 luod=0'0 crt=31'39 mlcod 0'0 active pruub 1213.264404297s@ mbc={}] start_peering_interval up [3,2,1] -> [0,1,4], acting [3,2,1] -> [0,1,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:45 localhost ceph-osd[31330]: osd.2 pg_epoch: 51 pg[7.6( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=39/29 lis/c=43/43 les/c/f=44/46/0 sis=51 pruub=9.548993111s) [0,1,4] r=-1 lpr=51 pi=[43,51)/1 crt=31'39 mlcod 0'0 unknown NOTIFY pruub 1213.264404297s@ mbc={}] state: transitioning to Stray Oct 14 04:12:45 localhost ceph-osd[31330]: osd.2 pg_epoch: 51 pg[7.e( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=39/29 lis/c=43/43 les/c/f=44/46/0 sis=51 pruub=9.549178123s) [0,1,4] r=-1 lpr=51 pi=[43,51)/1 crt=31'39 mlcod 0'0 unknown NOTIFY pruub 1213.264526367s@ mbc={}] state: transitioning to Stray Oct 14 04:12:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:12:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 3946 writes, 18K keys, 3946 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 3946 writes, 279 syncs, 14.14 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 699 writes, 2689 keys, 699 commit groups, 1.0 writes per commit group, ingest: 1.34 MB, 0.00 MB/s#012Interval WAL: 699 writes, 140 syncs, 4.99 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** 
Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 
memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557c1d2f22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 
MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557c1d2f22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 
0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Oct 14 04:12:46 localhost ceph-osd[32282]: osd.4 pg_epoch: 51 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=43/43 les/c/f=44/46/0 sis=51) [0,1,4] r=2 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:46 localhost ceph-osd[32282]: osd.4 pg_epoch: 51 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=43/43 les/c/f=44/46/0 sis=51) [0,1,4] r=2 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:47 localhost ceph-osd[32282]: osd.4 pg_epoch: 53 pg[7.f( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=39/29 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.217965126s) [1,4,3] r=1 lpr=53 pi=[45,53)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1210.364868164s@ mbc={}] start_peering_interval up [3,1,4] -> [1,4,3], acting [3,1,4] -> [1,4,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:47 localhost ceph-osd[32282]: osd.4 pg_epoch: 53 pg[7.f( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=39/29 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.217554092s) [1,4,3] r=1 lpr=53 pi=[45,53)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1210.364868164s@ mbc={}] state: transitioning to Stray Oct 14 04:12:47 localhost ceph-osd[32282]: osd.4 pg_epoch: 53 pg[7.7( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=39/29 lis/c=45/45 
les/c/f=46/46/0 sis=53 pruub=9.209458351s) [1,4,3] r=1 lpr=53 pi=[45,53)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1210.357788086s@ mbc={}] start_peering_interval up [3,1,4] -> [1,4,3], acting [3,1,4] -> [1,4,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:47 localhost ceph-osd[32282]: osd.4 pg_epoch: 53 pg[7.7( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=39/29 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.209011078s) [1,4,3] r=1 lpr=53 pi=[45,53)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1210.357788086s@ mbc={}] state: transitioning to Stray Oct 14 04:12:48 localhost python3[56854]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:12:48 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.f scrub starts Oct 14 04:12:48 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.9 scrub starts Oct 14 04:12:48 localhost python3[56899]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429567.862816-93140-26095477132455/source _original_basename=tmpnpjsaw98 follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:12:48 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.f scrub ok Oct 14 04:12:49 localhost python3[56961]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:12:50 localhost python3[57004]: 
ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429569.5089982-93265-209975834825932/source _original_basename=tmpwf3u9m7k follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:12:50 localhost ceph-osd[32282]: osd.4 pg_epoch: 55 pg[7.8( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=55 pruub=11.915827751s) [3,1,2] r=-1 lpr=55 pi=[39,55)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1216.147705078s@ mbc={}] start_peering_interval up [5,4,3] -> [3,1,2], acting [5,4,3] -> [3,1,2], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:50 localhost ceph-osd[32282]: osd.4 pg_epoch: 55 pg[7.8( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=55 pruub=11.915404320s) [3,1,2] r=-1 lpr=55 pi=[39,55)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1216.147705078s@ mbc={}] state: transitioning to Stray Oct 14 04:12:50 localhost python3[57034]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None Oct 14 04:12:51 localhost python3[57052]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:12:51 localhost ceph-osd[32282]: osd.4 pg_epoch: 56 pg[7.9( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=56 
pruub=10.889058113s) [0,4,5] r=1 lpr=56 pi=[39,56)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1216.148315430s@ mbc={}] start_peering_interval up [5,4,3] -> [0,4,5], acting [5,4,3] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:51 localhost ceph-osd[32282]: osd.4 pg_epoch: 56 pg[7.9( v 31'39 (0'0,31'39] local-lis/les=39/40 n=1 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=56 pruub=10.888526917s) [0,4,5] r=1 lpr=56 pi=[39,56)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1216.148315430s@ mbc={}] state: transitioning to Stray Oct 14 04:12:51 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts Oct 14 04:12:51 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok Oct 14 04:12:51 localhost ceph-osd[31330]: osd.2 pg_epoch: 55 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=39/39 les/c/f=40/40/0 sis=55) [3,1,2] r=2 lpr=55 pi=[39,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:12:52 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 6.1e deep-scrub starts Oct 14 04:12:52 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 6.1e deep-scrub ok Oct 14 04:12:52 localhost ansible-async_wrapper.py[57224]: Invoked with 149409953105 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429572.4101663-93377-167771251068870/AnsiballZ_command.py _ Oct 14 04:12:52 localhost ansible-async_wrapper.py[57227]: Starting module and watcher Oct 14 04:12:52 localhost ansible-async_wrapper.py[57227]: Start watching 57228 (3600) Oct 14 04:12:52 localhost ansible-async_wrapper.py[57228]: Start module (57228) Oct 14 04:12:52 localhost ansible-async_wrapper.py[57224]: Return async_wrapper task started. 
Oct 14 04:12:53 localhost python3[57248]: ansible-ansible.legacy.async_status Invoked with jid=149409953105.57224 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:12:53 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 6.1c deep-scrub starts Oct 14 04:12:53 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 6.1c deep-scrub ok Oct 14 04:12:54 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.c scrub starts Oct 14 04:12:54 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.c scrub ok Oct 14 04:12:54 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.8 scrub starts Oct 14 04:12:54 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.8 scrub ok Oct 14 04:12:56 localhost puppet-user[57246]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 14 04:12:56 localhost puppet-user[57246]: (file: /etc/puppet/hiera.yaml) Oct 14 04:12:56 localhost puppet-user[57246]: Warning: Undefined variable '::deploy_config_name'; Oct 14 04:12:56 localhost puppet-user[57246]: (file & line not available) Oct 14 04:12:56 localhost puppet-user[57246]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 14 04:12:56 localhost puppet-user[57246]: (file & line not available) Oct 14 04:12:56 localhost puppet-user[57246]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 14 04:12:57 localhost puppet-user[57246]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 14 04:12:57 localhost puppet-user[57246]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.12 seconds Oct 14 04:12:57 localhost puppet-user[57246]: Notice: Applied catalog in 0.03 seconds Oct 14 04:12:57 localhost puppet-user[57246]: Application: Oct 14 04:12:57 localhost puppet-user[57246]: Initial environment: production Oct 14 04:12:57 localhost puppet-user[57246]: Converged environment: production Oct 14 04:12:57 localhost puppet-user[57246]: Run mode: user Oct 14 04:12:57 localhost puppet-user[57246]: Changes: Oct 14 04:12:57 localhost puppet-user[57246]: Events: Oct 14 04:12:57 localhost puppet-user[57246]: Resources: Oct 14 04:12:57 localhost puppet-user[57246]: Total: 10 Oct 14 04:12:57 localhost puppet-user[57246]: Time: Oct 14 04:12:57 localhost puppet-user[57246]: Schedule: 0.00 Oct 14 04:12:57 localhost puppet-user[57246]: File: 0.00 Oct 14 04:12:57 localhost puppet-user[57246]: Exec: 0.01 Oct 14 04:12:57 localhost puppet-user[57246]: Augeas: 0.01 Oct 14 04:12:57 localhost puppet-user[57246]: Transaction evaluation: 0.02 Oct 14 04:12:57 localhost puppet-user[57246]: Catalog application: 0.03 Oct 14 04:12:57 localhost puppet-user[57246]: Config retrieval: 0.15 Oct 14 04:12:57 localhost puppet-user[57246]: Last run: 1760429577 Oct 14 04:12:57 localhost puppet-user[57246]: Filebucket: 0.00 Oct 14 04:12:57 localhost puppet-user[57246]: Total: 0.04 Oct 14 04:12:57 localhost puppet-user[57246]: Version: Oct 14 04:12:57 localhost puppet-user[57246]: Config: 1760429576 Oct 14 04:12:57 localhost puppet-user[57246]: Puppet: 7.10.0 Oct 14 04:12:57 localhost ansible-async_wrapper.py[57228]: Module complete (57228) Oct 14 04:12:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:12:57 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 5.13 scrub starts Oct 14 04:12:57 localhost systemd[1]: tmp-crun.Mb1Gag.mount: Deactivated successfully. Oct 14 04:12:57 localhost podman[57360]: 2025-10-14 08:12:57.556386074 +0000 UTC m=+0.095429830 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, 
container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:12:57 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 5.13 scrub ok Oct 14 04:12:57 localhost podman[57360]: 2025-10-14 08:12:57.749121633 +0000 UTC m=+0.288165399 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:12:57 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:12:57 localhost ansible-async_wrapper.py[57227]: Done in kid B. 
Oct 14 04:12:58 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 5.e scrub starts Oct 14 04:12:58 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 5.e scrub ok Oct 14 04:12:59 localhost ceph-osd[31330]: osd.2 pg_epoch: 58 pg[7.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=39/29 lis/c=43/43 les/c/f=44/44/0 sis=58 pruub=11.665218353s) [4,0,5] r=-1 lpr=58 pi=[43,58)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1229.261840820s@ mbc={}] start_peering_interval up [3,2,1] -> [4,0,5], acting [3,2,1] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:12:59 localhost ceph-osd[31330]: osd.2 pg_epoch: 58 pg[7.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=39/29 lis/c=43/43 les/c/f=44/44/0 sis=58 pruub=11.663910866s) [4,0,5] r=-1 lpr=58 pi=[43,58)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1229.261840820s@ mbc={}] state: transitioning to Stray Oct 14 04:12:59 localhost ceph-osd[32282]: osd.4 pg_epoch: 58 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=43/43 les/c/f=44/44/0 sis=58) [4,0,5] r=0 lpr=58 pi=[43,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 14 04:13:00 localhost ceph-osd[32282]: osd.4 pg_epoch: 59 pg[7.a( v 31'39 (0'0,31'39] local-lis/les=58/59 n=1 ec=39/29 lis/c=43/43 les/c/f=44/44/0 sis=58) [4,0,5] r=0 lpr=58 pi=[43,58)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 14 04:13:01 localhost ceph-osd[32282]: osd.4 pg_epoch: 60 pg[7.b( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=39/29 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=11.224602699s) [3,1,2] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1226.365722656s@ mbc={}] start_peering_interval up [3,1,4] -> [3,1,2], acting [3,1,4] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 
04:13:01 localhost ceph-osd[32282]: osd.4 pg_epoch: 60 pg[7.b( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=39/29 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=11.224494934s) [3,1,2] r=-1 lpr=60 pi=[45,60)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1226.365722656s@ mbc={}] state: transitioning to Stray Oct 14 04:13:02 localhost ceph-osd[31330]: osd.2 pg_epoch: 60 pg[7.b( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=45/45 les/c/f=46/46/0 sis=60) [3,1,2] r=2 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:13:02 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.10 scrub starts Oct 14 04:13:02 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.10 scrub ok Oct 14 04:13:03 localhost ceph-osd[31330]: osd.2 pg_epoch: 62 pg[7.c( v 31'39 (0'0,31'39] local-lis/les=47/48 n=1 ec=39/29 lis/c=47/47 les/c/f=48/48/0 sis=62 pruub=14.762930870s) [1,3,4] r=-1 lpr=62 pi=[47,62)/1 luod=0'0 crt=31'39 mlcod 0'0 active pruub 1236.549560547s@ mbc={}] start_peering_interval up [0,1,2] -> [1,3,4], acting [0,1,2] -> [1,3,4], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:13:03 localhost ceph-osd[31330]: osd.2 pg_epoch: 62 pg[7.c( v 31'39 (0'0,31'39] local-lis/les=47/48 n=1 ec=39/29 lis/c=47/47 les/c/f=48/48/0 sis=62 pruub=14.762842178s) [1,3,4] r=-1 lpr=62 pi=[47,62)/1 crt=31'39 mlcod 0'0 unknown NOTIFY pruub 1236.549560547s@ mbc={}] state: transitioning to Stray Oct 14 04:13:03 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.15 scrub starts Oct 14 04:13:03 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.15 scrub ok Oct 14 04:13:03 localhost python3[57531]: ansible-ansible.legacy.async_status Invoked with jid=149409953105.57224 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:13:04 localhost python3[57547]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory 
setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:13:04 localhost ceph-osd[32282]: osd.4 pg_epoch: 62 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=47/47 les/c/f=48/48/0 sis=62) [1,3,4] r=2 lpr=62 pi=[47,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:13:04 localhost python3[57563]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:13:05 localhost python3[57613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:13:05 localhost ceph-osd[32282]: osd.4 pg_epoch: 64 pg[7.d( v 31'39 (0'0,31'39] local-lis/les=49/50 n=1 ec=39/29 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=11.253210068s) [1,3,2] r=-1 lpr=64 pi=[49,64)/1 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1230.497802734s@ mbc={255={}}] start_peering_interval up [4,0,1] -> [1,3,2], acting [4,0,1] -> [1,3,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:13:05 localhost ceph-osd[32282]: osd.4 pg_epoch: 64 pg[7.d( v 31'39 (0'0,31'39] local-lis/les=49/50 n=1 ec=39/29 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=11.253104210s) [1,3,2] r=-1 lpr=64 pi=[49,64)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1230.497802734s@ mbc={}] state: transitioning to Stray Oct 14 04:13:05 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.14 scrub starts Oct 14 04:13:05 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 
3.0 scrub starts Oct 14 04:13:05 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.14 scrub ok Oct 14 04:13:05 localhost python3[57631]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpw7p2n6cu recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:13:05 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 3.0 scrub ok Oct 14 04:13:05 localhost python3[57661]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:06 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.9 scrub starts Oct 14 04:13:06 localhost ceph-osd[31330]: osd.2 pg_epoch: 64 pg[7.d( empty local-lis/les=0/0 n=0 ec=39/29 lis/c=49/49 les/c/f=50/50/0 sis=64) [1,3,2] r=2 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 14 04:13:06 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.9 scrub ok Oct 14 04:13:07 localhost python3[57764]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 
rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Oct 14 04:13:07 localhost ceph-osd[32282]: osd.4 pg_epoch: 66 pg[7.e( v 31'39 (0'0,31'39] local-lis/les=51/52 n=1 ec=39/29 lis/c=51/51 les/c/f=52/52/0 sis=66 pruub=11.039235115s) [3,4,1] r=1 lpr=66 pi=[51,66)/1 luod=0'0 crt=31'39 mlcod 0'0 active pruub 1232.332153320s@ mbc={}] start_peering_interval up [0,1,4] -> [3,4,1], acting [0,1,4] -> [3,4,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:13:07 localhost ceph-osd[32282]: osd.4 pg_epoch: 66 pg[7.e( v 31'39 (0'0,31'39] local-lis/les=51/52 n=1 ec=39/29 lis/c=51/51 les/c/f=52/52/0 sis=66 pruub=11.039037704s) [3,4,1] r=1 lpr=66 pi=[51,66)/1 crt=31'39 mlcod 0'0 unknown NOTIFY pruub 1232.332153320s@ mbc={}] state: transitioning to Stray Oct 14 04:13:07 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.1f scrub starts Oct 14 04:13:07 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 4.1f scrub ok Oct 14 04:13:07 localhost python3[57783]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:08 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.12 scrub starts Oct 14 04:13:09 localhost python3[57815]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:13:09 localhost python3[57865]: ansible-ansible.legacy.stat Invoked with 
path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:13:09 localhost ceph-osd[32282]: osd.4 pg_epoch: 68 pg[7.f( v 31'39 (0'0,31'39] local-lis/les=53/54 n=1 ec=39/29 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=10.651444435s) [0,4,5] r=1 lpr=68 pi=[53,68)/1 luod=0'0 crt=31'39 lcod 0'0 mlcod 0'0 active pruub 1234.200439453s@ mbc={}] start_peering_interval up [1,4,3] -> [0,4,5], acting [1,4,3] -> [0,4,5], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 14 04:13:09 localhost ceph-osd[32282]: osd.4 pg_epoch: 68 pg[7.f( v 31'39 (0'0,31'39] local-lis/les=53/54 n=1 ec=39/29 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=10.651247025s) [0,4,5] r=1 lpr=68 pi=[53,68)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1234.200439453s@ mbc={}] state: transitioning to Stray Oct 14 04:13:09 localhost python3[57883]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:10 localhost python3[57945]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:13:10 localhost python3[57963]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:11 localhost python3[58025]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:13:11 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts Oct 14 04:13:11 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok Oct 14 04:13:11 localhost python3[58043]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:12 localhost python3[58105]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:13:12 localhost python3[58123]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:12 localhost python3[58153]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:13:12 localhost systemd[1]: Reloading. Oct 14 04:13:13 localhost systemd-rc-local-generator[58179]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:13:13 localhost systemd-sysv-generator[58182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:13:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:13:13 localhost python3[58239]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:13:14 localhost python3[58257]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:14 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 3.15 scrub starts Oct 14 04:13:14 localhost python3[58319]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset 
follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:13:14 localhost python3[58337]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:15 localhost python3[58367]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:13:15 localhost systemd[1]: Reloading. Oct 14 04:13:15 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.0 scrub starts Oct 14 04:13:15 localhost systemd-rc-local-generator[58387]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:13:15 localhost systemd-sysv-generator[58391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:13:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:13:15 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.0 scrub ok Oct 14 04:13:15 localhost systemd[1]: Starting Create netns directory... Oct 14 04:13:15 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 14 04:13:15 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. 
Oct 14 04:13:15 localhost systemd[1]: Finished Create netns directory. Oct 14 04:13:16 localhost python3[58424]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 14 04:13:17 localhost python3[58480]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Oct 14 04:13:18 localhost podman[58553]: 2025-10-14 08:13:18.182002731 +0000 UTC m=+0.087120903 container create 738f931016ef0e10c1be0d92862c160f247ec68dafbc496acec6d9f610d80f5e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_virtqemud_init_logs, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_id=tripleo_step2, io.buildah.version=1.33.12, 
io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.) Oct 14 04:13:18 localhost podman[58560]: 2025-10-14 08:13:18.207004332 +0000 UTC m=+0.090718432 container create 5eab9bdd8a7efd3361227e625725bff515fee4b9fc6d934b4f92205cd5b284b3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, container_name=nova_compute_init_log, vcs-type=git, config_id=tripleo_step2, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=) Oct 14 04:13:18 localhost podman[58553]: 2025-10-14 08:13:18.138069525 +0000 UTC m=+0.043187737 image 
pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:13:18 localhost systemd[1]: Started libpod-conmon-738f931016ef0e10c1be0d92862c160f247ec68dafbc496acec6d9f610d80f5e.scope. Oct 14 04:13:18 localhost systemd[1]: Started libpod-conmon-5eab9bdd8a7efd3361227e625725bff515fee4b9fc6d934b4f92205cd5b284b3.scope. Oct 14 04:13:18 localhost podman[58560]: 2025-10-14 08:13:18.152795585 +0000 UTC m=+0.036509705 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 14 04:13:18 localhost systemd[1]: Started libcrun container. Oct 14 04:13:18 localhost systemd[1]: Started libcrun container. Oct 14 04:13:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/496ac8ae1b781159d9732cba668aefff9d4a69111a9ec162f48ec47befb2b47b/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:13:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30fc8906bf3c4dddee8af1b0fb71de2370697abfd1d45bc721251a95c39f5658/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Oct 14 04:13:18 localhost podman[58560]: 2025-10-14 08:13:18.285513011 +0000 UTC m=+0.169227101 container init 5eab9bdd8a7efd3361227e625725bff515fee4b9fc6d934b4f92205cd5b284b3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute_init_log, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step2, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:13:18 localhost podman[58560]: 2025-10-14 08:13:18.297024234 +0000 UTC m=+0.180738304 container start 5eab9bdd8a7efd3361227e625725bff515fee4b9fc6d934b4f92205cd5b284b3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step2, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 
container_name=nova_compute_init_log, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc.) Oct 14 04:13:18 localhost python3[58480]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1760428406 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Oct 14 04:13:18 localhost systemd[1]: libpod-5eab9bdd8a7efd3361227e625725bff515fee4b9fc6d934b4f92205cd5b284b3.scope: Deactivated successfully. 
Oct 14 04:13:18 localhost podman[58553]: 2025-10-14 08:13:18.334854094 +0000 UTC m=+0.239972256 container init 738f931016ef0e10c1be0d92862c160f247ec68dafbc496acec6d9f610d80f5e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, container_name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=2, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2) Oct 14 04:13:18 localhost podman[58553]: 2025-10-14 08:13:18.344805895 +0000 UTC m=+0.249924057 container start 738f931016ef0e10c1be0d92862c160f247ec68dafbc496acec6d9f610d80f5e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., 
batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step2, tcib_managed=true, container_name=nova_virtqemud_init_logs, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 14 04:13:18 localhost python3[58480]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1760428406 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 
'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Oct 14 04:13:18 localhost systemd[1]: libpod-738f931016ef0e10c1be0d92862c160f247ec68dafbc496acec6d9f610d80f5e.scope: Deactivated successfully. Oct 14 04:13:18 localhost podman[58592]: 2025-10-14 08:13:18.369024185 +0000 UTC m=+0.051198926 container died 5eab9bdd8a7efd3361227e625725bff515fee4b9fc6d934b4f92205cd5b284b3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, container_name=nova_compute_init_log, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, 
io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37) Oct 14 04:13:18 localhost podman[58592]: 2025-10-14 08:13:18.399692951 +0000 UTC m=+0.081867652 container cleanup 5eab9bdd8a7efd3361227e625725bff515fee4b9fc6d934b4f92205cd5b284b3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, config_id=tripleo_step2, container_name=nova_compute_init_log, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, version=17.1.9) Oct 14 04:13:18 localhost systemd[1]: 
libpod-conmon-5eab9bdd8a7efd3361227e625725bff515fee4b9fc6d934b4f92205cd5b284b3.scope: Deactivated successfully. Oct 14 04:13:18 localhost podman[58617]: 2025-10-14 08:13:18.4316232 +0000 UTC m=+0.063966464 container died 738f931016ef0e10c1be0d92862c160f247ec68dafbc496acec6d9f610d80f5e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, version=17.1.9, release=2, io.buildah.version=1.33.12, config_id=tripleo_step2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:59, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Oct 14 04:13:18 localhost podman[58617]: 2025-10-14 08:13:18.580360221 +0000 UTC m=+0.212703445 container cleanup 738f931016ef0e10c1be0d92862c160f247ec68dafbc496acec6d9f610d80f5e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_virtqemud_init_logs, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=2, architecture=x86_64, config_id=tripleo_step2, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc.) Oct 14 04:13:18 localhost systemd[1]: libpod-conmon-738f931016ef0e10c1be0d92862c160f247ec68dafbc496acec6d9f610d80f5e.scope: Deactivated successfully. 
Oct 14 04:13:18 localhost podman[58746]: 2025-10-14 08:13:18.937782926 +0000 UTC m=+0.072891607 container create bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=2, vendor=Red Hat, Inc., version=17.1.9) Oct 14 04:13:18 localhost podman[58745]: 2025-10-14 08:13:18.953595466 +0000 UTC m=+0.089218801 container create 5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, release=1, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-type=git, container_name=create_haproxy_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1) Oct 14 04:13:18 localhost systemd[1]: Started libpod-conmon-bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5.scope. Oct 14 04:13:18 localhost systemd[1]: Started libpod-conmon-5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201.scope. Oct 14 04:13:18 localhost systemd[1]: Started libcrun container. Oct 14 04:13:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9d6a6189040853d18c1b25b60fe5e20e54845d3f8eb5e145d9272c6a19c97d/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Oct 14 04:13:18 localhost systemd[1]: Started libcrun container. 
Oct 14 04:13:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d62222e8be5ac5f7261ca7d31d843da4ab3033140a4b9bae53a55e69f471cf7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 04:13:18 localhost podman[58746]: 2025-10-14 08:13:18.898175837 +0000 UTC m=+0.033284528 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:13:19 localhost podman[58746]: 2025-10-14 08:13:18.999830015 +0000 UTC m=+0.134938696 container init bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=create_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team) Oct 14 04:13:19 localhost podman[58745]: 2025-10-14 08:13:18.906125163 +0000 UTC m=+0.041748488 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 14 04:13:19 localhost podman[58745]: 2025-10-14 08:13:19.006202199 +0000 UTC m=+0.141825524 container init 5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=create_haproxy_wrapper, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 
'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step2, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, maintainer=OpenStack TripleO Team) Oct 14 04:13:19 localhost podman[58745]: 2025-10-14 08:13:19.011963315 +0000 UTC m=+0.147586650 container start 5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, 
managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step2) Oct 14 04:13:19 localhost podman[58745]: 2025-10-14 08:13:19.012616253 +0000 UTC m=+0.148239598 container attach 5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=create_haproxy_wrapper, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step2, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, 
managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git) Oct 14 04:13:19 localhost podman[58746]: 2025-10-14 08:13:19.058039871 +0000 UTC m=+0.193148582 container start bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step2, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=create_virtlogd_wrapper, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, architecture=x86_64, build-date=2025-07-21T14:56:59) Oct 14 04:13:19 localhost podman[58746]: 2025-10-14 08:13:19.058696709 +0000 UTC m=+0.193805460 container attach bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, release=2, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_id=tripleo_step2, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=create_virtlogd_wrapper, architecture=x86_64) Oct 14 04:13:19 localhost systemd[1]: var-lib-containers-storage-overlay-496ac8ae1b781159d9732cba668aefff9d4a69111a9ec162f48ec47befb2b47b-merged.mount: Deactivated successfully. Oct 14 04:13:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5eab9bdd8a7efd3361227e625725bff515fee4b9fc6d934b4f92205cd5b284b3-userdata-shm.mount: Deactivated successfully. Oct 14 04:13:19 localhost systemd[1]: var-lib-containers-storage-overlay-30fc8906bf3c4dddee8af1b0fb71de2370697abfd1d45bc721251a95c39f5658-merged.mount: Deactivated successfully. Oct 14 04:13:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-738f931016ef0e10c1be0d92862c160f247ec68dafbc496acec6d9f610d80f5e-userdata-shm.mount: Deactivated successfully. 
Oct 14 04:13:20 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.1a scrub starts Oct 14 04:13:20 localhost ovs-vsctl[58848]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Oct 14 04:13:20 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.1a scrub ok Oct 14 04:13:21 localhost systemd[1]: libpod-bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5.scope: Deactivated successfully. Oct 14 04:13:21 localhost systemd[1]: libpod-bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5.scope: Consumed 2.108s CPU time. Oct 14 04:13:21 localhost podman[58746]: 2025-10-14 08:13:21.118267302 +0000 UTC m=+2.253376013 container died bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_step2, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., container_name=create_virtlogd_wrapper, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:56:59, release=2) Oct 14 04:13:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5-userdata-shm.mount: Deactivated successfully. Oct 14 04:13:21 localhost systemd[1]: var-lib-containers-storage-overlay-1b9d6a6189040853d18c1b25b60fe5e20e54845d3f8eb5e145d9272c6a19c97d-merged.mount: Deactivated successfully. 
Oct 14 04:13:21 localhost podman[58997]: 2025-10-14 08:13:21.222777068 +0000 UTC m=+0.092671025 container cleanup bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, container_name=create_virtlogd_wrapper, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, config_id=tripleo_step2, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack 
osp-17.1, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1) Oct 14 04:13:21 localhost systemd[1]: libpod-conmon-bf802da899e59cea3e199c444f0b3400b64951a9b6d3ebcf44efb10959d4f4e5.scope: Deactivated successfully. Oct 14 04:13:21 localhost python3[58480]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1760428406 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file 
--log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Oct 14 04:13:21 localhost systemd[1]: libpod-5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201.scope: Deactivated successfully. Oct 14 04:13:21 localhost systemd[1]: libpod-5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201.scope: Consumed 2.143s CPU time. 
Oct 14 04:13:21 localhost podman[58745]: 2025-10-14 08:13:21.947107976 +0000 UTC m=+3.082731341 container died 5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, build-date=2025-07-21T16:28:53, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=create_haproxy_wrapper, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public) Oct 14 04:13:22 localhost podman[59037]: 2025-10-14 08:13:22.022501539 +0000 UTC m=+0.068223340 container cleanup 5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=create_haproxy_wrapper, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:13:22 localhost systemd[1]: libpod-conmon-5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201.scope: Deactivated successfully. Oct 14 04:13:22 localhost python3[58480]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Oct 14 04:13:22 localhost systemd[1]: var-lib-containers-storage-overlay-8d62222e8be5ac5f7261ca7d31d843da4ab3033140a4b9bae53a55e69f471cf7-merged.mount: Deactivated successfully. Oct 14 04:13:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d61b1c404c31e7c54632a4b8933eb375e0ffdb05ec10e9840a5ecc2f461e201-userdata-shm.mount: Deactivated successfully. 
Oct 14 04:13:22 localhost python3[59095]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:22 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 5.4 scrub starts Oct 14 04:13:22 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 5.4 scrub ok Oct 14 04:13:23 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.1a scrub starts Oct 14 04:13:23 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.1a scrub ok Oct 14 04:13:24 localhost python3[59216]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005486731 step=2 update_config_hash_only=False Oct 14 04:13:24 localhost python3[59232]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:13:25 localhost python3[59248]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 14 04:13:26 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.8 scrub starts Oct 14 04:13:26 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.8 scrub ok Oct 14 04:13:27 
localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 3.e scrub starts Oct 14 04:13:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:13:28 localhost podman[59249]: 2025-10-14 08:13:28.571595097 +0000 UTC m=+0.104580560 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, release=1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible) Oct 14 04:13:28 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 5.12 scrub starts Oct 14 04:13:28 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 5.12 scrub ok Oct 14 04:13:28 localhost podman[59249]: 2025-10-14 08:13:28.789155672 +0000 UTC m=+0.322141145 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team) Oct 14 04:13:28 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:13:30 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.1c scrub starts Oct 14 04:13:30 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.1c scrub ok Oct 14 04:13:33 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 3.1d scrub starts Oct 14 04:13:33 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 3.1d scrub ok Oct 14 04:13:35 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.1b scrub starts Oct 14 04:13:35 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.1b scrub ok Oct 14 04:13:37 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.11 scrub starts Oct 14 04:13:37 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.11 scrub ok Oct 14 04:13:40 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.1d scrub starts Oct 14 04:13:43 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 7.5 scrub starts Oct 14 04:13:43 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 7.5 scrub ok Oct 14 04:13:45 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.8 scrub starts Oct 14 04:13:45 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.8 scrub ok Oct 14 04:13:46 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 7.a scrub starts Oct 14 04:13:46 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 7.a scrub ok Oct 14 04:13:48 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 6.1b scrub starts Oct 14 04:13:48 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 6.1b scrub ok Oct 14 04:13:50 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.1d scrub starts Oct 14 04:13:51 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 6.1 scrub starts Oct 14 04:13:51 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 6.1 scrub ok Oct 14 04:13:57 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.1c scrub starts Oct 14 04:13:57 localhost ceph-osd[32282]: log_channel(cluster) log 
[DBG] : 2.1c scrub ok Oct 14 04:13:58 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 6.12 deep-scrub starts Oct 14 04:13:58 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 6.12 deep-scrub ok Oct 14 04:13:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:13:59 localhost podman[59279]: 2025-10-14 08:13:59.548159966 +0000 UTC m=+0.082988438 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, version=17.1.9, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc.) Oct 14 04:13:59 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.9 scrub starts Oct 14 04:13:59 localhost podman[59279]: 2025-10-14 08:13:59.783415673 +0000 UTC m=+0.318244175 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git) Oct 14 04:13:59 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:14:00 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 3.9 scrub ok Oct 14 04:14:01 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.19 scrub starts Oct 14 04:14:01 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.19 scrub ok Oct 14 04:14:02 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.3 scrub starts Oct 14 04:14:02 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.3 scrub ok Oct 14 04:14:07 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts Oct 14 04:14:07 localhost ceph-osd[32282]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok Oct 14 04:14:12 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.6 scrub starts Oct 14 04:14:12 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.6 scrub ok Oct 14 04:14:14 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.d scrub starts Oct 14 04:14:14 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.d scrub ok Oct 14 04:14:15 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.b scrub starts Oct 14 04:14:15 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 5.b scrub ok Oct 14 04:14:21 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.1b scrub starts Oct 14 04:14:21 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.1b scrub ok Oct 14 04:14:23 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.12 scrub starts Oct 14 04:14:23 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 2.12 scrub ok Oct 14 04:14:24 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts Oct 14 04:14:24 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok Oct 14 04:14:27 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 3.e scrub starts Oct 14 04:14:27 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 3.e scrub ok Oct 14 04:14:29 localhost ceph-osd[31330]: 
log_channel(cluster) log [DBG] : 4.1d scrub starts Oct 14 04:14:29 localhost ceph-osd[31330]: log_channel(cluster) log [DBG] : 4.1d scrub ok Oct 14 04:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:14:30 localhost systemd[1]: tmp-crun.4JJXAe.mount: Deactivated successfully. Oct 14 04:14:30 localhost podman[59384]: 2025-10-14 08:14:30.543895487 +0000 UTC m=+0.081134219 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, version=17.1.9, build-date=2025-07-21T13:07:59, distribution-scope=public, container_name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:14:30 localhost podman[59384]: 2025-10-14 08:14:30.766120216 +0000 UTC m=+0.303358928 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible) Oct 14 04:14:30 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:15:01 localhost podman[59413]: 2025-10-14 08:15:01.540966841 +0000 UTC m=+0.082031304 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, config_id=tripleo_step1, io.buildah.version=1.33.12, version=17.1.9, 
build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:15:01 localhost podman[59413]: 2025-10-14 08:15:01.766101638 +0000 UTC m=+0.307166081 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, container_name=metrics_qdr, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible) Oct 14 04:15:01 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:15:03 localhost podman[59543]: 2025-10-14 08:15:03.109226174 +0000 UTC m=+0.090461569 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, build-date=2025-09-24T08:57:55, version=7, io.buildah.version=1.33.12, architecture=x86_64, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, 
maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True) Oct 14 04:15:03 localhost podman[59543]: 2025-10-14 08:15:03.240185244 +0000 UTC m=+0.221420649 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph) Oct 14 04:15:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:15:32 localhost podman[59686]: 2025-10-14 08:15:32.549652534 +0000 UTC m=+0.087170451 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., config_id=tripleo_step1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:15:32 localhost podman[59686]: 2025-10-14 08:15:32.73509724 +0000 UTC m=+0.272615147 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, architecture=x86_64, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9) Oct 14 04:15:32 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:16:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:16:03 localhost podman[59716]: 2025-10-14 08:16:03.53492193 +0000 UTC m=+0.076842769 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, managed_by=tripleo_ansible, container_name=metrics_qdr) Oct 14 04:16:03 localhost podman[59716]: 2025-10-14 08:16:03.757158675 +0000 UTC m=+0.299079474 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=metrics_qdr, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 14 04:16:03 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:16:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:16:34 localhost podman[59821]: 2025-10-14 08:16:34.547635845 +0000 UTC m=+0.080849195 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 
17.1 qdrouterd, tcib_managed=true, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1) Oct 14 04:16:34 localhost podman[59821]: 2025-10-14 08:16:34.798008815 +0000 UTC m=+0.331222105 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:16:34 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:17:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:17:05 localhost systemd[1]: tmp-crun.eiGHYU.mount: Deactivated successfully. 
Oct 14 04:17:05 localhost podman[59852]: 2025-10-14 08:17:05.546351846 +0000 UTC m=+0.084570774 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1) Oct 14 04:17:05 localhost podman[59852]: 2025-10-14 08:17:05.769594257 +0000 UTC m=+0.307813185 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, config_id=tripleo_step1, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:17:05 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:17:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:17:36 localhost podman[59959]: 2025-10-14 08:17:36.537295466 +0000 UTC m=+0.078287248 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, container_name=metrics_qdr, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, 
name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:17:36 localhost podman[59959]: 2025-10-14 08:17:36.72296737 +0000 UTC m=+0.263959102 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, container_name=metrics_qdr, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container) Oct 14 04:17:36 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:17:53 localhost python3[60038]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:17:54 localhost python3[60083]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429873.3504891-99452-61321571156816/source _original_basename=tmpjm61pbt2 follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:17:54 localhost python3[60113]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:17:56 localhost ansible-async_wrapper.py[60285]: Invoked with 162416101016 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429876.0036223-99609-94094036482388/AnsiballZ_command.py _ Oct 14 04:17:56 localhost ansible-async_wrapper.py[60288]: Starting module and watcher Oct 14 04:17:56 localhost ansible-async_wrapper.py[60288]: Start watching 60289 (3600) Oct 14 04:17:56 localhost ansible-async_wrapper.py[60289]: Start module (60289) Oct 14 04:17:56 localhost ansible-async_wrapper.py[60285]: Return async_wrapper task started. Oct 14 04:17:56 localhost python3[60309]: ansible-ansible.legacy.async_status Invoked with jid=162416101016.60285 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:18:00 localhost puppet-user[60307]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 14 04:18:00 localhost puppet-user[60307]: (file: /etc/puppet/hiera.yaml) Oct 14 04:18:00 localhost puppet-user[60307]: Warning: Undefined variable '::deploy_config_name'; Oct 14 04:18:00 localhost puppet-user[60307]: (file & line not available) Oct 14 04:18:00 localhost puppet-user[60307]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 14 04:18:00 localhost puppet-user[60307]: (file & line not available) Oct 14 04:18:00 localhost puppet-user[60307]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 14 04:18:00 localhost puppet-user[60307]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 14 04:18:00 localhost puppet-user[60307]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.11 seconds Oct 14 04:18:00 localhost puppet-user[60307]: Notice: Applied catalog in 0.03 seconds Oct 14 04:18:00 localhost puppet-user[60307]: Application: Oct 14 04:18:00 localhost puppet-user[60307]: Initial environment: production Oct 14 04:18:00 localhost puppet-user[60307]: Converged environment: production Oct 14 04:18:00 localhost puppet-user[60307]: Run mode: user Oct 14 04:18:00 localhost puppet-user[60307]: Changes: Oct 14 04:18:00 localhost puppet-user[60307]: Events: Oct 14 04:18:00 localhost puppet-user[60307]: Resources: Oct 14 04:18:00 localhost puppet-user[60307]: Total: 10 Oct 14 04:18:00 localhost puppet-user[60307]: Time: Oct 14 04:18:00 localhost puppet-user[60307]: Schedule: 0.00 Oct 14 04:18:00 localhost puppet-user[60307]: File: 0.00 Oct 14 04:18:00 localhost puppet-user[60307]: Exec: 0.01 Oct 14 04:18:00 localhost puppet-user[60307]: Augeas: 0.01 Oct 14 04:18:00 localhost puppet-user[60307]: Transaction evaluation: 
0.03 Oct 14 04:18:00 localhost puppet-user[60307]: Catalog application: 0.03 Oct 14 04:18:00 localhost puppet-user[60307]: Config retrieval: 0.15 Oct 14 04:18:00 localhost puppet-user[60307]: Last run: 1760429880 Oct 14 04:18:00 localhost puppet-user[60307]: Filebucket: 0.00 Oct 14 04:18:00 localhost puppet-user[60307]: Total: 0.04 Oct 14 04:18:00 localhost puppet-user[60307]: Version: Oct 14 04:18:00 localhost puppet-user[60307]: Config: 1760429880 Oct 14 04:18:00 localhost puppet-user[60307]: Puppet: 7.10.0 Oct 14 04:18:00 localhost ansible-async_wrapper.py[60289]: Module complete (60289) Oct 14 04:18:01 localhost ansible-async_wrapper.py[60288]: Done in kid B. Oct 14 04:18:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:18:07 localhost podman[60436]: 2025-10-14 08:18:07.160621954 +0000 UTC m=+0.093633975 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, architecture=x86_64) Oct 14 04:18:07 localhost python3[60435]: ansible-ansible.legacy.async_status Invoked with jid=162416101016.60285 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:18:07 localhost podman[60436]: 2025-10-14 08:18:07.382826338 +0000 UTC m=+0.315838259 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, version=17.1.9, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, container_name=metrics_qdr, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1) Oct 14 04:18:07 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:18:08 localhost python3[60510]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:18:08 localhost python3[60543]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:18:08 localhost python3[60607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:18:09 localhost python3[60640]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpwit90xo_ recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:18:09 localhost python3[60670]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:18:10 localhost python3[60773]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ 
dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Oct 14 04:18:11 localhost python3[60792]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:18:13 localhost python3[60824]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:18:13 localhost python3[60874]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:18:14 localhost python3[60892]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:18:14 localhost python3[60954]: ansible-ansible.legacy.stat Invoked with 
path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:18:14 localhost python3[60972]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:18:15 localhost python3[61034]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:18:15 localhost python3[61052]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:18:16 localhost python3[61114]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:18:16 localhost python3[61132]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset 
_original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:18:17 localhost python3[61162]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:18:17 localhost systemd[1]: Reloading. Oct 14 04:18:17 localhost systemd-rc-local-generator[61183]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:18:17 localhost systemd-sysv-generator[61187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:18:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 04:18:18 localhost python3[61247]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:18:18 localhost python3[61265]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:18:18 localhost python3[61327]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:18:19 localhost python3[61345]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:18:19 localhost python3[61375]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:18:19 localhost systemd[1]: Reloading. Oct 14 04:18:19 localhost systemd-rc-local-generator[61398]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 04:18:19 localhost systemd-sysv-generator[61401]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:18:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:18:19 localhost systemd[1]: Starting Create netns directory... Oct 14 04:18:19 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 14 04:18:19 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 14 04:18:19 localhost systemd[1]: Finished Create netns directory. Oct 14 04:18:20 localhost python3[61432]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 14 04:18:22 localhost python3[61491]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Oct 14 04:18:22 localhost podman[61660]: 2025-10-14 08:18:22.824547729 +0000 UTC m=+0.075716069 container create 4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_init_log, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, 
config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:18:22 localhost podman[61659]: 2025-10-14 08:18:22.830142767 +0000 UTC m=+0.081065261 container create c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=) Oct 14 04:18:22 localhost podman[61673]: 2025-10-14 08:18:22.863121442 +0000 UTC m=+0.102994463 container create 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-07-21T12:58:40, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 
rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, version=17.1.9, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.buildah.version=1.33.12, container_name=rsyslog, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}) Oct 14 04:18:22 localhost podman[61662]: 2025-10-14 08:18:22.875622283 +0000 UTC m=+0.116626473 container create 
decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, release=2, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, config_id=tripleo_step3) Oct 14 04:18:22 localhost systemd[1]: Started libpod-conmon-c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.scope. Oct 14 04:18:22 localhost podman[61659]: 2025-10-14 08:18:22.7857514 +0000 UTC m=+0.036673914 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 14 04:18:22 localhost systemd[1]: Started libpod-conmon-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e.scope. 
Oct 14 04:18:22 localhost podman[61661]: 2025-10-14 08:18:22.788128713 +0000 UTC m=+0.036779597 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 14 04:18:22 localhost podman[61660]: 2025-10-14 08:18:22.785937645 +0000 UTC m=+0.037105995 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Oct 14 04:18:22 localhost podman[61661]: 2025-10-14 08:18:22.897193496 +0000 UTC m=+0.145844370 container create a8ad44a7d11502def66ad6d6ef4a8387b50703535ff2dc4fda9daf2a0685e4ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step3, container_name=nova_statedir_owner, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, release=1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., tcib_managed=true, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red 
Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:18:22 localhost podman[61673]: 2025-10-14 08:18:22.808326568 +0000 UTC m=+0.048199619 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 14 04:18:22 localhost systemd[1]: Started libcrun container. Oct 14 04:18:22 localhost systemd[1]: Started libcrun container. Oct 14 04:18:22 localhost systemd[1]: Started libpod-conmon-decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006.scope. Oct 14 04:18:22 localhost podman[61662]: 2025-10-14 08:18:22.814256575 +0000 UTC m=+0.055260785 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:22 localhost systemd[1]: Started libpod-conmon-4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6.scope. Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab4a314da1a4f576142ebf117938164a5edfd56bd6085edc385b152e23dd08e/merged/scripts supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab4a314da1a4f576142ebf117938164a5edfd56bd6085edc385b152e23dd08e/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost systemd[1]: Started libcrun container. 
Oct 14 04:18:22 localhost podman[61673]: 2025-10-14 08:18:22.924849449 +0000 UTC m=+0.164722490 container init 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:40, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., release=1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, container_name=rsyslog, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-rsyslog-container, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost systemd[1]: Started libcrun container. Oct 14 04:18:22 localhost systemd[1]: Started libpod-conmon-a8ad44a7d11502def66ad6d6ef4a8387b50703535ff2dc4fda9daf2a0685e4ca.scope. 
Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d529a5e87063d6d8be572e15ccc1a6e2cd4e03cf8d02224d51bfc8e004317f/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost podman[61662]: 2025-10-14 08:18:22.936775265 +0000 UTC m=+0.177779455 container init decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_virtlogd_wrapper, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, build-date=2025-07-21T14:56:59, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, 
config_id=tripleo_step3, tcib_managed=true) Oct 14 04:18:22 localhost systemd[1]: Started libcrun container. Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2907680d146b3ac52bd167b30a8c95c31d3d501236d96d25e118eb29f3ddf43b/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost podman[61660]: 2025-10-14 08:18:22.940639927 +0000 UTC m=+0.191808297 container init 4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step3, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, build-date=2025-07-21T15:29:47, version=17.1.9, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.buildah.version=1.33.12) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/2907680d146b3ac52bd167b30a8c95c31d3d501236d96d25e118eb29f3ddf43b/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2907680d146b3ac52bd167b30a8c95c31d3d501236d96d25e118eb29f3ddf43b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:22 localhost podman[61662]: 2025-10-14 08:18:22.942753824 +0000 UTC m=+0.183758014 container start decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, release=2, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper) Oct 14 04:18:22 localhost podman[61661]: 2025-10-14 08:18:22.946325149 +0000 UTC m=+0.194976003 container init a8ad44a7d11502def66ad6d6ef4a8387b50703535ff2dc4fda9daf2a0685e4ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_statedir_owner) Oct 14 04:18:22 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f5be0e0347f8a081fe8927c6f95950cc --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:22 localhost systemd[1]: libpod-4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6.scope: Deactivated successfully. 
Oct 14 04:18:22 localhost podman[61660]: 2025-10-14 08:18:22.954757233 +0000 UTC m=+0.205925573 container start 4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, version=17.1.9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 14 04:18:22 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Oct 14 04:18:22 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 14 04:18:22 localhost podman[61673]: 2025-10-14 08:18:22.985618401 +0000 UTC m=+0.225491442 container start 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, vendor=Red Hat, Inc., release=1) Oct 14 04:18:22 localhost systemd[1]: Created slice User Slice of UID 0. Oct 14 04:18:22 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Oct 14 04:18:22 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=4c9706ce89053601d63131b238721a51 --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 14 04:18:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:18:23 localhost podman[61661]: 2025-10-14 08:18:23.005384365 +0000 UTC m=+0.254035229 container start a8ad44a7d11502def66ad6d6ef4a8387b50703535ff2dc4fda9daf2a0685e4ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, container_name=nova_statedir_owner, batch=17.1_20250721.1, architecture=x86_64, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:18:23 localhost podman[61661]: 2025-10-14 08:18:23.00630963 +0000 UTC m=+0.254960504 container attach a8ad44a7d11502def66ad6d6ef4a8387b50703535ff2dc4fda9daf2a0685e4ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_statedir_owner, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': 
{'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9) Oct 14 04:18:23 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 14 04:18:23 localhost systemd[1]: Starting User Manager for UID 0... Oct 14 04:18:23 localhost podman[61659]: 2025-10-14 08:18:23.015845462 +0000 UTC m=+0.266768126 container init c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, container_name=collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T13:04:03, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3) Oct 14 04:18:23 localhost systemd[1]: libpod-a8ad44a7d11502def66ad6d6ef4a8387b50703535ff2dc4fda9daf2a0685e4ca.scope: Deactivated successfully. Oct 14 04:18:23 localhost systemd[1]: libpod-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e.scope: Deactivated successfully. 
Oct 14 04:18:23 localhost podman[61661]: 2025-10-14 08:18:23.023522736 +0000 UTC m=+0.272173590 container died a8ad44a7d11502def66ad6d6ef4a8387b50703535ff2dc4fda9daf2a0685e4ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, container_name=nova_statedir_owner, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true) Oct 14 04:18:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. 
Oct 14 04:18:23 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 14 04:18:23 localhost podman[61659]: 2025-10-14 08:18:23.050640446 +0000 UTC m=+0.301562950 container start c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, container_name=collectd, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:18:23 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=da9a0dc7b40588672419e3ce10063e21 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 14 04:18:23 localhost podman[61810]: 2025-10-14 08:18:23.111419497 +0000 UTC m=+0.073364817 container cleanup a8ad44a7d11502def66ad6d6ef4a8387b50703535ff2dc4fda9daf2a0685e4ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, config_id=tripleo_step3, container_name=nova_statedir_owner, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:18:23 localhost systemd[1]: libpod-conmon-a8ad44a7d11502def66ad6d6ef4a8387b50703535ff2dc4fda9daf2a0685e4ca.scope: Deactivated successfully. 
Oct 14 04:18:23 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1760428406 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Oct 14 04:18:23 localhost podman[61759]: 2025-10-14 08:18:23.130836242 +0000 UTC m=+0.164276467 container died 4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=1, container_name=ceilometer_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:18:23 localhost podman[61786]: 2025-10-14 08:18:23.166801766 +0000 UTC m=+0.150246435 container died 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, container_name=rsyslog, vcs-type=git, name=rhosp17/openstack-rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:18:23 localhost systemd[61787]: Queued start job for default target Main User Target. Oct 14 04:18:23 localhost systemd[61787]: Created slice User Application Slice. Oct 14 04:18:23 localhost systemd[61787]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 14 04:18:23 localhost systemd[61787]: Started Daily Cleanup of User's Temporary Directories. Oct 14 04:18:23 localhost systemd[61787]: Reached target Paths. 
Oct 14 04:18:23 localhost systemd[61787]: Reached target Timers. Oct 14 04:18:23 localhost systemd[61787]: Starting D-Bus User Message Bus Socket... Oct 14 04:18:23 localhost systemd[61787]: Starting Create User's Volatile Files and Directories... Oct 14 04:18:23 localhost systemd[61787]: Finished Create User's Volatile Files and Directories. Oct 14 04:18:23 localhost systemd[61787]: Listening on D-Bus User Message Bus Socket. Oct 14 04:18:23 localhost systemd[61787]: Reached target Sockets. Oct 14 04:18:23 localhost systemd[61787]: Reached target Basic System. Oct 14 04:18:23 localhost systemd[61787]: Reached target Main User Target. Oct 14 04:18:23 localhost systemd[61787]: Startup finished in 127ms. Oct 14 04:18:23 localhost systemd[1]: Started User Manager for UID 0. Oct 14 04:18:23 localhost systemd[1]: Started Session c1 of User root. Oct 14 04:18:23 localhost systemd[1]: Started Session c2 of User root. Oct 14 04:18:23 localhost podman[61759]: 2025-10-14 08:18:23.212127918 +0000 UTC m=+0.245568123 container cleanup 4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, container_name=ceilometer_init_log, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step3, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 14 04:18:23 localhost podman[61811]: 2025-10-14 08:18:23.216444673 +0000 UTC m=+0.177334944 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T12:58:40, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, name=rhosp17/openstack-rsyslog, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12) Oct 14 04:18:23 localhost systemd[1]: libpod-conmon-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e.scope: Deactivated successfully. Oct 14 04:18:23 localhost systemd[1]: libpod-conmon-4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6.scope: Deactivated successfully. Oct 14 04:18:23 localhost systemd[1]: session-c2.scope: Deactivated successfully. Oct 14 04:18:23 localhost systemd[1]: session-c1.scope: Deactivated successfully. 
Oct 14 04:18:23 localhost podman[61822]: 2025-10-14 08:18:23.371153496 +0000 UTC m=+0.317949323 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=2, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:04:03, container_name=collectd, vendor=Red Hat, Inc.) Oct 14 04:18:23 localhost podman[61822]: 2025-10-14 08:18:23.410082839 +0000 UTC m=+0.356878666 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03) Oct 14 04:18:23 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:18:23 localhost podman[62042]: 2025-10-14 08:18:23.572804564 +0000 UTC m=+0.077901397 container create 7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, release=2) Oct 14 04:18:23 localhost systemd[1]: Started libpod-conmon-7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58.scope. Oct 14 04:18:23 localhost podman[62042]: 2025-10-14 08:18:23.533014929 +0000 UTC m=+0.038111812 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:23 localhost systemd[1]: Started libcrun container. 
Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea6eba3b41452cab8e715ebf0cbb227001a53fa044ec7fc4361e175f631660e/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea6eba3b41452cab8e715ebf0cbb227001a53fa044ec7fc4361e175f631660e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea6eba3b41452cab8e715ebf0cbb227001a53fa044ec7fc4361e175f631660e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea6eba3b41452cab8e715ebf0cbb227001a53fa044ec7fc4361e175f631660e/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost podman[62042]: 2025-10-14 08:18:23.645849541 +0000 UTC m=+0.150946414 container init 7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=2, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 14 04:18:23 localhost podman[62042]: 2025-10-14 08:18:23.66011674 +0000 UTC m=+0.165213633 container start 7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, release=2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true) Oct 14 04:18:23 localhost podman[62070]: 2025-10-14 08:18:23.680951893 +0000 UTC m=+0.081524093 container create b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, build-date=2025-07-21T14:56:59, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=nova_virtsecretd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, release=2, io.openshift.expose-services=, 
managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12) Oct 14 04:18:23 localhost systemd[1]: Started libpod-conmon-b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433.scope. Oct 14 04:18:23 localhost systemd[1]: Started libcrun container. Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost podman[62070]: 2025-10-14 08:18:23.639249097 +0000 UTC m=+0.039821337 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:23 localhost podman[62070]: 2025-10-14 08:18:23.746652285 +0000 UTC m=+0.147224485 container init b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T14:56:59, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, container_name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, batch=17.1_20250721.1, architecture=x86_64) Oct 14 04:18:23 localhost podman[62070]: 2025-10-14 08:18:23.756389594 +0000 UTC m=+0.156961794 container start b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, container_name=nova_virtsecretd, 
com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.33.12, release=2, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59, tcib_managed=true, vendor=Red Hat, Inc.) Oct 14 04:18:23 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f5be0e0347f8a081fe8927c6f95950cc --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume 
/run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:23 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 14 04:18:23 localhost systemd[1]: Started Session c3 of User root. Oct 14 04:18:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e-userdata-shm.mount: Deactivated successfully. Oct 14 04:18:23 localhost systemd[1]: var-lib-containers-storage-overlay-09d529a5e87063d6d8be572e15ccc1a6e2cd4e03cf8d02224d51bfc8e004317f-merged.mount: Deactivated successfully. Oct 14 04:18:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6-userdata-shm.mount: Deactivated successfully. Oct 14 04:18:23 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
Oct 14 04:18:24 localhost podman[62218]: 2025-10-14 08:18:24.232198044 +0000 UTC m=+0.108760686 container create df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=iscsid, name=rhosp17/openstack-iscsid, 
managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-iscsid-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 04:18:24 localhost podman[62225]: 2025-10-14 08:18:24.264090719 +0000 UTC m=+0.116164752 container create 30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_virtnodedevd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1) Oct 14 04:18:24 localhost podman[62218]: 2025-10-14 08:18:24.184795236 +0000 UTC m=+0.061357938 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 14 04:18:24 localhost systemd[1]: Started libpod-conmon-df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.scope. Oct 14 04:18:24 localhost podman[62225]: 2025-10-14 08:18:24.195284994 +0000 UTC m=+0.047359087 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:24 localhost systemd[1]: Started libpod-conmon-30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed.scope. 
Oct 14 04:18:24 localhost systemd[1]: Started libcrun container. Oct 14 04:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa0ed2f8930991b55c20b14a15d726f2d078ff05272993cec0208c15a14da5/merged/etc/target supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:24 localhost systemd[1]: Started libcrun container. Oct 14 04:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa0ed2f8930991b55c20b14a15d726f2d078ff05272993cec0208c15a14da5/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323/merged/var/lib/libvirt supports timestamps until 2038 
(0x7fffffff) Oct 14 04:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:24 localhost podman[62225]: 2025-10-14 08:18:24.332453042 +0000 UTC m=+0.184527095 container init 30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtnodedevd, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, vcs-type=git) Oct 14 04:18:24 localhost podman[62225]: 2025-10-14 08:18:24.343400482 +0000 UTC m=+0.195474515 container start 30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, container_name=nova_virtnodedevd, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 14 04:18:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:18:24 localhost podman[62218]: 2025-10-14 08:18:24.351462006 +0000 UTC m=+0.228024698 container init df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-iscsid-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9) Oct 14 04:18:24 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f5be0e0347f8a081fe8927c6f95950cc --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume 
/var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:18:24 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Oct 14 04:18:24 localhost podman[62218]: 2025-10-14 08:18:24.392510105 +0000 UTC m=+0.269072767 container start df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, container_name=iscsid, name=rhosp17/openstack-iscsid, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Oct 14 04:18:24 localhost systemd[1]: Started Session c4 of User root. Oct 14 04:18:24 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 14 04:18:24 localhost systemd[1]: Started Session c5 of User root. Oct 14 04:18:24 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bd9d045a0b37801182392caf49375c15 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 14 04:18:24 localhost systemd[1]: session-c4.scope: Deactivated successfully. Oct 14 04:18:24 localhost systemd[1]: session-c5.scope: Deactivated successfully. Oct 14 04:18:24 localhost kernel: Loading iSCSI transport class v2.0-870. 
Oct 14 04:18:24 localhost podman[62270]: 2025-10-14 08:18:24.515944019 +0000 UTC m=+0.111075167 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, container_name=iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., version=17.1.9) Oct 14 04:18:24 localhost podman[62270]: 2025-10-14 08:18:24.606322096 +0000 UTC m=+0.201453234 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, release=1, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15) Oct 14 04:18:24 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:18:24 localhost podman[62408]: 2025-10-14 08:18:24.984172448 +0000 UTC m=+0.085101718 container create 5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, container_name=nova_virtstoraged, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, release=2, build-date=2025-07-21T14:56:59, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 
'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 14 04:18:25 localhost systemd[1]: Started libpod-conmon-5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4.scope. 
Oct 14 04:18:25 localhost podman[62408]: 2025-10-14 08:18:24.936542654 +0000 UTC m=+0.037471994 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:25 localhost systemd[1]: Started libcrun container. Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost podman[62408]: 2025-10-14 08:18:25.061333383 +0000 UTC m=+0.162262633 
container init 5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_virtstoraged, managed_by=tripleo_ansible, architecture=x86_64, release=2, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc.) Oct 14 04:18:25 localhost podman[62408]: 2025-10-14 08:18:25.073099986 +0000 UTC m=+0.174029236 container start 5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 
'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtstoraged, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, release=2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc.) 
Oct 14 04:18:25 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f5be0e0347f8a081fe8927c6f95950cc --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:25 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 14 04:18:25 localhost systemd[1]: Started Session c6 of User root. Oct 14 04:18:25 localhost systemd[1]: session-c6.scope: Deactivated successfully. Oct 14 04:18:25 localhost podman[62514]: 2025-10-14 08:18:25.592219824 +0000 UTC m=+0.095406071 container create 005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, build-date=2025-07-21T14:56:59) Oct 14 04:18:25 localhost systemd[1]: Started libpod-conmon-005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893.scope. Oct 14 04:18:25 localhost podman[62514]: 2025-10-14 08:18:25.548116274 +0000 UTC m=+0.051302551 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:25 localhost systemd[1]: Started libcrun container. 
Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:25 localhost podman[62514]: 2025-10-14 08:18:25.681572344 +0000 UTC 
m=+0.184758591 container init 005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, container_name=nova_virtqemud) Oct 14 04:18:25 localhost podman[62514]: 2025-10-14 08:18:25.690990524 +0000 UTC m=+0.194176781 container start 005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, distribution-scope=public, container_name=nova_virtqemud, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, 
build-date=2025-07-21T14:56:59, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:18:25 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f5be0e0347f8a081fe8927c6f95950cc --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume 
/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:25 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 14 04:18:25 localhost systemd[1]: Started Session c7 of User root. Oct 14 04:18:25 localhost systemd[1]: session-c7.scope: Deactivated successfully. Oct 14 04:18:26 localhost podman[62619]: 2025-10-14 08:18:26.19076835 +0000 UTC m=+0.088982652 container create 2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=nova_virtproxyd, name=rhosp17/openstack-nova-libvirt) Oct 14 04:18:26 localhost podman[62619]: 2025-10-14 08:18:26.14100545 +0000 UTC m=+0.039219812 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:18:26 localhost systemd[1]: Started libpod-conmon-2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd.scope. Oct 14 04:18:26 localhost systemd[1]: Started libcrun container. 
Oct 14 04:18:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:26 localhost podman[62619]: 2025-10-14 08:18:26.275291821 +0000 UTC m=+0.173506103 container init 2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, container_name=nova_virtproxyd, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T14:56:59, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 14 04:18:26 localhost podman[62619]: 2025-10-14 08:18:26.287065363 +0000 UTC m=+0.185279645 container start 2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_virtproxyd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 
'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:59, tcib_managed=true) Oct 14 04:18:26 localhost python3[61491]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid 
--detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f5be0e0347f8a081fe8927c6f95950cc --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 14 04:18:26 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Oct 14 04:18:26 localhost systemd[1]: Started Session c8 of User root.
Oct 14 04:18:26 localhost systemd[1]: session-c8.scope: Deactivated successfully.
Oct 14 04:18:26 localhost python3[62701]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:27 localhost python3[62717]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:27 localhost python3[62733]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:27 localhost python3[62749]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:28 localhost python3[62765]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:28 localhost python3[62781]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:28 localhost python3[62798]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:28 localhost python3[62814]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:29 localhost python3[62830]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:29 localhost python3[62846]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:18:29 localhost python3[62862]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:18:29 localhost python3[62878]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:18:30 localhost python3[62894]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:18:30 localhost python3[62910]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:18:30 localhost python3[62926]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:18:30 localhost python3[62942]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:18:31 localhost python3[62958]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:18:31 localhost python3[62974]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:18:32 localhost python3[63035]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429911.4517477-100899-19316474349178/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:32 localhost python3[63064]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429911.4517477-100899-19316474349178/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:33 localhost python3[63093]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429911.4517477-100899-19316474349178/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:33 localhost python3[63122]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429911.4517477-100899-19316474349178/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:34 localhost python3[63151]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429911.4517477-100899-19316474349178/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:34 localhost python3[63180]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429911.4517477-100899-19316474349178/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:35 localhost python3[63210]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429911.4517477-100899-19316474349178/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:35 localhost python3[63239]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429911.4517477-100899-19316474349178/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:36 localhost python3[63268]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760429911.4517477-100899-19316474349178/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:36 localhost python3[63284]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:18:36 localhost systemd[1]: Reloading.
Oct 14 04:18:36 localhost systemd-sysv-generator[63308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:36 localhost systemd-rc-local-generator[63301]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:36 localhost systemd[1]: Stopping User Manager for UID 0...
Oct 14 04:18:36 localhost systemd[61787]: Activating special unit Exit the Session...
Oct 14 04:18:36 localhost systemd[61787]: Stopped target Main User Target.
Oct 14 04:18:36 localhost systemd[61787]: Stopped target Basic System.
Oct 14 04:18:36 localhost systemd[61787]: Stopped target Paths.
Oct 14 04:18:36 localhost systemd[61787]: Stopped target Sockets.
Oct 14 04:18:36 localhost systemd[61787]: Stopped target Timers.
Oct 14 04:18:36 localhost systemd[61787]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 14 04:18:36 localhost systemd[61787]: Closed D-Bus User Message Bus Socket.
Oct 14 04:18:36 localhost systemd[61787]: Stopped Create User's Volatile Files and Directories.
Oct 14 04:18:36 localhost systemd[61787]: Removed slice User Application Slice.
Oct 14 04:18:36 localhost systemd[61787]: Reached target Shutdown.
Oct 14 04:18:36 localhost systemd[61787]: Finished Exit the Session.
Oct 14 04:18:36 localhost systemd[61787]: Reached target Exit the Session.
Oct 14 04:18:36 localhost systemd[1]: user@0.service: Deactivated successfully.
Oct 14 04:18:36 localhost systemd[1]: Stopped User Manager for UID 0.
Oct 14 04:18:36 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 14 04:18:36 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 14 04:18:36 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 14 04:18:36 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 14 04:18:36 localhost systemd[1]: Removed slice User Slice of UID 0.
Oct 14 04:18:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.
Oct 14 04:18:37 localhost python3[63339]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:37 localhost podman[63340]: 2025-10-14 08:18:37.549742178 +0000 UTC m=+0.084110442 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64)
Oct 14 04:18:37 localhost systemd[1]: Reloading.
Oct 14 04:18:37 localhost systemd-rc-local-generator[63393]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:37 localhost systemd-sysv-generator[63397]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:37 localhost podman[63340]: 2025-10-14 08:18:37.794915031 +0000 UTC m=+0.329283355 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Oct 14 04:18:37 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully.
Oct 14 04:18:37 localhost systemd[1]: Starting collectd container...
Oct 14 04:18:37 localhost systemd[1]: Started collectd container.
Oct 14 04:18:38 localhost python3[63434]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:38 localhost systemd[1]: Reloading.
Oct 14 04:18:38 localhost systemd-rc-local-generator[63462]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:38 localhost systemd-sysv-generator[63467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:38 localhost systemd[1]: Starting iscsid container...
Oct 14 04:18:39 localhost systemd[1]: Started iscsid container.
Oct 14 04:18:39 localhost python3[63501]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:39 localhost systemd[1]: Reloading.
Oct 14 04:18:39 localhost systemd-sysv-generator[63534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:39 localhost systemd-rc-local-generator[63530]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:39 localhost systemd[1]: Starting nova_virtlogd_wrapper container...
Oct 14 04:18:40 localhost systemd[1]: Started nova_virtlogd_wrapper container.
Oct 14 04:18:40 localhost python3[63567]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:40 localhost systemd[1]: Reloading.
Oct 14 04:18:40 localhost systemd-rc-local-generator[63593]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:40 localhost systemd-sysv-generator[63597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:41 localhost systemd[1]: Starting nova_virtnodedevd container...
Oct 14 04:18:41 localhost tripleo-start-podman-container[63607]: Creating additional drop-in dependency for "nova_virtnodedevd" (30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed)
Oct 14 04:18:41 localhost systemd[1]: Reloading.
Oct 14 04:18:41 localhost systemd-rc-local-generator[63662]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:41 localhost systemd-sysv-generator[63667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:41 localhost systemd[1]: Started nova_virtnodedevd container.
Oct 14 04:18:42 localhost python3[63690]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:42 localhost systemd[1]: Reloading.
Oct 14 04:18:42 localhost systemd-rc-local-generator[63715]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:42 localhost systemd-sysv-generator[63721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:42 localhost systemd[1]: Starting nova_virtproxyd container...
Oct 14 04:18:42 localhost tripleo-start-podman-container[63730]: Creating additional drop-in dependency for "nova_virtproxyd" (2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd)
Oct 14 04:18:42 localhost systemd[1]: Reloading.
Oct 14 04:18:42 localhost systemd-rc-local-generator[63788]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:42 localhost systemd-sysv-generator[63793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:43 localhost systemd[1]: Started nova_virtproxyd container.
Oct 14 04:18:43 localhost python3[63815]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:43 localhost systemd[1]: Reloading.
Oct 14 04:18:43 localhost systemd-rc-local-generator[63844]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:43 localhost systemd-sysv-generator[63847]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:44 localhost systemd[1]: Starting nova_virtqemud container...
Oct 14 04:18:44 localhost tripleo-start-podman-container[63855]: Creating additional drop-in dependency for "nova_virtqemud" (005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893)
Oct 14 04:18:44 localhost systemd[1]: Reloading.
Oct 14 04:18:44 localhost systemd-sysv-generator[63913]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:44 localhost systemd-rc-local-generator[63909]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:44 localhost systemd[1]: Started nova_virtqemud container.
Oct 14 04:18:45 localhost python3[63938]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:45 localhost systemd[1]: Reloading.
Oct 14 04:18:45 localhost systemd-rc-local-generator[63962]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:45 localhost systemd-sysv-generator[63966]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:45 localhost systemd[1]: Starting nova_virtsecretd container...
Oct 14 04:18:45 localhost tripleo-start-podman-container[63978]: Creating additional drop-in dependency for "nova_virtsecretd" (b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433)
Oct 14 04:18:45 localhost systemd[1]: Reloading.
Oct 14 04:18:45 localhost systemd-sysv-generator[64035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:45 localhost systemd-rc-local-generator[64031]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:46 localhost systemd[1]: Started nova_virtsecretd container.
Oct 14 04:18:46 localhost python3[64061]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:46 localhost systemd[1]: Reloading.
Oct 14 04:18:47 localhost systemd-rc-local-generator[64087]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:47 localhost systemd-sysv-generator[64093]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:47 localhost systemd[1]: Starting nova_virtstoraged container...
Oct 14 04:18:47 localhost tripleo-start-podman-container[64100]: Creating additional drop-in dependency for "nova_virtstoraged" (5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4)
Oct 14 04:18:47 localhost systemd[1]: Reloading.
Oct 14 04:18:47 localhost systemd-sysv-generator[64161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:47 localhost systemd-rc-local-generator[64156]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:47 localhost systemd[1]: Started nova_virtstoraged container.
Oct 14 04:18:48 localhost python3[64184]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:48 localhost systemd[1]: Reloading.
Oct 14 04:18:48 localhost systemd-rc-local-generator[64208]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:48 localhost systemd-sysv-generator[64212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:18:48 localhost systemd[1]: Starting rsyslog container...
Oct 14 04:18:48 localhost systemd[1]: Started libcrun container.
Oct 14 04:18:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Oct 14 04:18:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Oct 14 04:18:48 localhost podman[64224]: 2025-10-14 08:18:48.826604429 +0000 UTC m=+0.136282306 container init 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, release=1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:40, container_name=rsyslog, vcs-type=git, name=rhosp17/openstack-rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167)
Oct 14 04:18:48 localhost podman[64224]: 2025-10-14 08:18:48.839274665 +0000 UTC m=+0.148952542 container start 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1, build-date=2025-07-21T12:58:40, config_id=tripleo_step3, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, batch=17.1_20250721.1, version=17.1.9)
Oct 14 04:18:48 localhost podman[64224]: rsyslog
Oct 14 04:18:48 localhost systemd[1]: Started rsyslog container.
Oct 14 04:18:48 localhost systemd[1]: libpod-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e.scope: Deactivated successfully.
Oct 14 04:18:48 localhost podman[64246]: 2025-10-14 08:18:48.992760946 +0000 UTC m=+0.046192766 container died 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, vcs-type=git, build-date=2025-07-21T12:58:40, container_name=rsyslog, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-rsyslog-container)
Oct 14 04:18:49 localhost podman[64246]: 2025-10-14 08:18:49.017200554 +0000 UTC m=+0.070632304 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, build-date=2025-07-21T12:58:40, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vcs-type=git, container_name=rsyslog, io.openshift.expose-services=, tcib_managed=true)
Oct 14 04:18:49 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 04:18:49 localhost podman[64273]: 2025-10-14 08:18:49.10111061 +0000 UTC m=+0.047964673 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, config_id=tripleo_step3, build-date=2025-07-21T12:58:40, container_name=rsyslog, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.33.12, vcs-type=git, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 14 04:18:49 localhost podman[64273]: rsyslog
Oct 14 04:18:49 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Oct 14 04:18:49 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1.
Oct 14 04:18:49 localhost systemd[1]: Stopped rsyslog container.
Oct 14 04:18:49 localhost systemd[1]: Starting rsyslog container...
Oct 14 04:18:49 localhost systemd[1]: Started libcrun container.
Oct 14 04:18:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Oct 14 04:18:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Oct 14 04:18:49 localhost podman[64301]: 2025-10-14 08:18:49.383305804 +0000 UTC m=+0.129382863 container init 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, release=1)
Oct 14 04:18:49 localhost podman[64301]: 2025-10-14 08:18:49.393857113 +0000 UTC m=+0.139934182 container start 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, build-date=2025-07-21T12:58:40, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.33.12, config_id=tripleo_step3, container_name=rsyslog, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167)
Oct 14 04:18:49 localhost podman[64301]: rsyslog
Oct 14 04:18:49 localhost systemd[1]: Started rsyslog container.
Oct 14 04:18:49 localhost python3[64302]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:49 localhost systemd[1]: libpod-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e.scope: Deactivated successfully.
Oct 14 04:18:49 localhost podman[64324]: 2025-10-14 08:18:49.541422748 +0000 UTC m=+0.041495072 container died 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, container_name=rsyslog, io.openshift.expose-services=, batch=17.1_20250721.1)
Oct 14 04:18:49 localhost podman[64324]: 2025-10-14 08:18:49.560831902 +0000 UTC m=+0.060904206 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, container_name=rsyslog, release=1, build-date=2025-07-21T12:58:40, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Oct 14 04:18:49 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 04:18:49 localhost podman[64338]: 2025-10-14 08:18:49.638194395 +0000 UTC m=+0.054156969 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T12:58:40, tcib_managed=true, architecture=x86_64, container_name=rsyslog, managed_by=tripleo_ansible, config_id=tripleo_step3)
Oct 14 04:18:49 localhost podman[64338]: rsyslog
Oct 14 04:18:49 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Oct 14 04:18:49 localhost systemd[1]: var-lib-containers-storage-overlay-281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48-merged.mount: Deactivated successfully.
Oct 14 04:18:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e-userdata-shm.mount: Deactivated successfully.
Oct 14 04:18:49 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2.
Oct 14 04:18:49 localhost systemd[1]: Stopped rsyslog container.
Oct 14 04:18:49 localhost systemd[1]: Starting rsyslog container...
Oct 14 04:18:50 localhost systemd[1]: Started libcrun container.
Oct 14 04:18:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Oct 14 04:18:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Oct 14 04:18:50 localhost podman[64380]: 2025-10-14 08:18:50.095941685 +0000 UTC m=+0.135040483 container init 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, build-date=2025-07-21T12:58:40, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, release=1, container_name=rsyslog, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible)
Oct 14 04:18:50 localhost podman[64380]: 2025-10-14 08:18:50.105693803 +0000 UTC m=+0.144792601 container start 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:40, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1, batch=17.1_20250721.1, container_name=rsyslog, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167)
Oct 14 04:18:50 localhost podman[64380]: rsyslog
Oct 14 04:18:50 localhost systemd[1]: Started rsyslog container.
Oct 14 04:18:50 localhost systemd[1]: libpod-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e.scope: Deactivated successfully.
Oct 14 04:18:50 localhost podman[64421]: 2025-10-14 08:18:50.259559444 +0000 UTC m=+0.056601452 container died 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=rsyslog, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, build-date=2025-07-21T12:58:40, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Oct 14 04:18:50 localhost podman[64421]: 2025-10-14 08:18:50.282558895 +0000 UTC m=+0.079600853 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vcs-type=git, container_name=rsyslog, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T12:58:40, name=rhosp17/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, release=1, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 14 04:18:50 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 04:18:50 localhost podman[64435]: 2025-10-14 08:18:50.364875078 +0000 UTC m=+0.058754680 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-rsyslog-container, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:40, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team)
Oct 14 04:18:50 localhost podman[64435]: rsyslog
Oct 14 04:18:50 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Oct 14 04:18:50 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3.
Oct 14 04:18:50 localhost systemd[1]: Stopped rsyslog container.
Oct 14 04:18:50 localhost systemd[1]: Starting rsyslog container...
Oct 14 04:18:50 localhost systemd[1]: Started libcrun container.
Oct 14 04:18:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:50 localhost podman[64490]: 2025-10-14 08:18:50.643052125 +0000 UTC m=+0.120632180 container init 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, name=rhosp17/openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:40, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.33.12, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, tcib_managed=true, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., vcs-type=git) Oct 14 04:18:50 localhost podman[64490]: 2025-10-14 08:18:50.653482392 +0000 UTC m=+0.131062407 container start 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, build-date=2025-07-21T12:58:40, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:18:50 localhost podman[64490]: rsyslog Oct 14 04:18:50 localhost systemd[1]: Started rsyslog container. Oct 14 04:18:50 localhost systemd[1]: libpod-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e.scope: Deactivated successfully. 
Oct 14 04:18:50 localhost podman[64526]: 2025-10-14 08:18:50.829991014 +0000 UTC m=+0.056676614 container died 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step3, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, architecture=x86_64, container_name=rsyslog, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog) Oct 14 04:18:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e-userdata-shm.mount: Deactivated successfully. Oct 14 04:18:50 localhost systemd[1]: var-lib-containers-storage-overlay-281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48-merged.mount: Deactivated successfully. Oct 14 04:18:50 localhost podman[64526]: 2025-10-14 08:18:50.854876554 +0000 UTC m=+0.081562114 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', 
'/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:40, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, name=rhosp17/openstack-rsyslog, io.buildah.version=1.33.12) Oct 14 04:18:50 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:18:50 localhost podman[64539]: 2025-10-14 08:18:50.951566028 +0000 UTC m=+0.062936570 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T12:58:40, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, container_name=rsyslog, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 rsyslog) Oct 14 04:18:50 localhost podman[64539]: rsyslog Oct 14 04:18:50 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. 
Oct 14 04:18:51 localhost python3[64565]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005486731 step=3 update_config_hash_only=False Oct 14 04:18:51 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Oct 14 04:18:51 localhost systemd[1]: Stopped rsyslog container. Oct 14 04:18:51 localhost systemd[1]: Starting rsyslog container... Oct 14 04:18:51 localhost systemd[1]: Started libcrun container. Oct 14 04:18:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 14 04:18:51 localhost podman[64568]: 2025-10-14 08:18:51.331340891 +0000 UTC m=+0.117922789 container init 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, container_name=rsyslog, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, build-date=2025-07-21T12:58:40, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1) Oct 14 04:18:51 localhost podman[64568]: 2025-10-14 08:18:51.342685212 +0000 UTC m=+0.129267120 container start 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., tcib_managed=true, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack 
osp-17.1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, batch=17.1_20250721.1, build-date=2025-07-21T12:58:40, name=rhosp17/openstack-rsyslog, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, version=17.1.9, managed_by=tripleo_ansible) Oct 14 04:18:51 localhost podman[64568]: rsyslog Oct 14 04:18:51 localhost systemd[1]: Started rsyslog container. Oct 14 04:18:51 localhost systemd[1]: libpod-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e.scope: Deactivated successfully. 
Oct 14 04:18:51 localhost podman[64590]: 2025-10-14 08:18:51.518142536 +0000 UTC m=+0.054995680 container died 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, batch=17.1_20250721.1, container_name=rsyslog, release=1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 14 04:18:51 localhost podman[64590]: 2025-10-14 08:18:51.541945706 +0000 UTC m=+0.078798820 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, release=1, tcib_managed=true, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.openshift.expose-services=, build-date=2025-07-21T12:58:40, io.buildah.version=1.33.12, config_id=tripleo_step3, architecture=x86_64) Oct 14 04:18:51 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:18:51 localhost podman[64617]: 2025-10-14 08:18:51.63483982 +0000 UTC m=+0.058424191 container cleanup 74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1, container_name=rsyslog, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, build-date=2025-07-21T12:58:40, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4c9706ce89053601d63131b238721a51'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, config_id=tripleo_step3, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git) Oct 14 04:18:51 localhost podman[64617]: rsyslog Oct 14 04:18:51 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 14 04:18:51 localhost python3[64618]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:18:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74bc301251db53c71b1eab1a566d66b07cc508e6de70917e68b2bdc985fd1a8e-userdata-shm.mount: Deactivated successfully. Oct 14 04:18:51 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. 
Oct 14 04:18:51 localhost systemd[1]: Stopped rsyslog container. Oct 14 04:18:51 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Oct 14 04:18:51 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 14 04:18:51 localhost systemd[1]: Failed to start rsyslog container. Oct 14 04:18:52 localhost python3[64644]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 14 04:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:18:53 localhost podman[64645]: 2025-10-14 08:18:53.513731074 +0000 UTC m=+0.061185714 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=) Oct 14 04:18:53 localhost podman[64645]: 2025-10-14 08:18:53.525894316 +0000 UTC m=+0.073348976 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, summary=Red Hat 
OpenStack Platform 17.1 collectd, version=17.1.9, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team) Oct 14 04:18:53 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:18:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:18:55 localhost systemd[1]: tmp-crun.YdnO32.mount: Deactivated successfully. Oct 14 04:18:55 localhost podman[64666]: 2025-10-14 08:18:55.541150716 +0000 UTC m=+0.084796511 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1, container_name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:18:55 localhost podman[64666]: 2025-10-14 08:18:55.579148443 +0000 UTC m=+0.122794268 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid) Oct 14 04:18:55 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:19:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:19:08 localhost podman[64685]: 2025-10-14 08:19:08.560827588 +0000 UTC m=+0.091686712 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, tcib_managed=true) Oct 14 04:19:08 localhost podman[64685]: 2025-10-14 08:19:08.761961623 +0000 UTC m=+0.292820677 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:19:08 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:19:24 localhost podman[64791]: 2025-10-14 08:19:24.560438998 +0000 UTC m=+0.094653801 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Oct 14 04:19:24 localhost podman[64791]: 2025-10-14 08:19:24.576073813 +0000 UTC m=+0.110288656 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, release=2, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
batch=17.1_20250721.1, vendor=Red Hat, Inc.) Oct 14 04:19:24 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:19:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:19:26 localhost podman[64810]: 2025-10-14 08:19:26.563344431 +0000 UTC m=+0.094412916 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:19:26 localhost podman[64810]: 2025-10-14 08:19:26.576179251 +0000 UTC m=+0.107247736 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, name=rhosp17/openstack-iscsid, release=1, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:19:26 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:19:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:19:39 localhost podman[64829]: 2025-10-14 08:19:39.527070051 +0000 UTC m=+0.069505824 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, vcs-type=git, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, 
container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=) Oct 14 04:19:39 localhost podman[64829]: 2025-10-14 08:19:39.765233168 +0000 UTC m=+0.307668981 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:19:39 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:19:55 localhost podman[64856]: 2025-10-14 08:19:55.539900154 +0000 UTC m=+0.081799571 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team) Oct 14 04:19:55 localhost podman[64856]: 2025-10-14 08:19:55.574651186 +0000 UTC m=+0.116550633 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, container_name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, version=17.1.9, 
maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc.) Oct 14 04:19:55 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:19:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:19:57 localhost podman[64876]: 2025-10-14 08:19:57.541796709 +0000 UTC m=+0.084423500 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-iscsid-container, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid) Oct 14 04:19:57 localhost podman[64876]: 2025-10-14 08:19:57.576217351 +0000 UTC m=+0.118844102 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, release=1, batch=17.1_20250721.1, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=) Oct 14 04:19:57 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:20:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:20:10 localhost podman[64895]: 2025-10-14 08:20:10.566810584 +0000 UTC m=+0.095010651 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, name=rhosp17/openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.buildah.version=1.33.12, maintainer=OpenStack TripleO 
Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1, tcib_managed=true) Oct 14 04:20:10 localhost podman[64895]: 2025-10-14 08:20:10.761095188 +0000 UTC m=+0.289295245 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, container_name=metrics_qdr, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:20:10 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:20:26 localhost podman[65000]: 2025-10-14 08:20:26.556917921 +0000 UTC m=+0.095648874 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, release=2, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, architecture=x86_64, batch=17.1_20250721.1) Oct 14 04:20:26 localhost podman[65000]: 2025-10-14 08:20:26.567703448 +0000 UTC m=+0.106434381 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 14 
04:20:26 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:20:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:20:28 localhost podman[65020]: 2025-10-14 08:20:28.528169795 +0000 UTC m=+0.073881715 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.9, release=1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Oct 14 04:20:28 localhost podman[65020]: 2025-10-14 08:20:28.56520661 +0000 UTC m=+0.110918520 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:20:28 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:20:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:20:41 localhost podman[65041]: 2025-10-14 08:20:41.540321066 +0000 UTC m=+0.079330430 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1) Oct 14 04:20:41 localhost podman[65041]: 2025-10-14 08:20:41.729340061 +0000 UTC m=+0.268349415 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, version=17.1.9, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, tcib_managed=true, vendor=Red Hat, Inc.) Oct 14 04:20:41 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:20:57 localhost podman[65070]: 2025-10-14 08:20:57.554260272 +0000 UTC m=+0.090354272 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd) Oct 14 04:20:57 localhost podman[65070]: 2025-10-14 08:20:57.594212315 +0000 UTC m=+0.130306305 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, container_name=collectd, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack 
osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, release=2, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, 
maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git) Oct 14 04:20:57 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:20:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:20:59 localhost systemd[1]: tmp-crun.jo3uyr.mount: Deactivated successfully. Oct 14 04:20:59 localhost podman[65090]: 2025-10-14 08:20:59.535202235 +0000 UTC m=+0.076056683 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, container_name=iscsid, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container) Oct 14 04:20:59 localhost podman[65090]: 2025-10-14 08:20:59.568165591 +0000 UTC m=+0.109020049 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:20:59 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:21:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:21:12 localhost podman[65122]: 2025-10-14 08:21:12.180927923 +0000 UTC m=+0.092545902 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1) Oct 14 04:21:12 localhost podman[65122]: 2025-10-14 08:21:12.376805251 +0000 UTC m=+0.288423260 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, batch=17.1_20250721.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:21:12 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:21:28 localhost systemd[1]: tmp-crun.gIiNRU.mount: Deactivated successfully. 
Oct 14 04:21:28 localhost podman[65213]: 2025-10-14 08:21:28.56010135 +0000 UTC m=+0.096407894 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, container_name=collectd, architecture=x86_64, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd) Oct 14 04:21:28 localhost podman[65213]: 2025-10-14 08:21:28.571998377 +0000 UTC m=+0.108304881 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, release=2, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03) Oct 14 04:21:28 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:21:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:21:30 localhost podman[65230]: 2025-10-14 08:21:30.53419835 +0000 UTC m=+0.074310177 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, container_name=iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, 
tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1) Oct 14 04:21:30 localhost podman[65230]: 2025-10-14 08:21:30.573154966 +0000 UTC m=+0.113266793 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public) Oct 14 04:21:30 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:21:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:21:42 localhost podman[65249]: 2025-10-14 08:21:42.53987564 +0000 UTC m=+0.079955657 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, version=17.1.9, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:21:42 localhost podman[65249]: 2025-10-14 08:21:42.764363339 +0000 UTC m=+0.304443326 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, config_id=tripleo_step1, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1) Oct 14 04:21:42 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:21:59 localhost systemd[1]: tmp-crun.P6Lqmj.mount: Deactivated successfully. 
Oct 14 04:21:59 localhost podman[65279]: 2025-10-14 08:21:59.547069056 +0000 UTC m=+0.087610561 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, 
distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, container_name=collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:21:59 localhost podman[65279]: 2025-10-14 08:21:59.558709815 +0000 UTC m=+0.099251320 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, container_name=collectd, release=2, build-date=2025-07-21T13:04:03) Oct 14 04:21:59 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:22:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:22:01 localhost podman[65301]: 2025-10-14 08:22:01.192643279 +0000 UTC m=+0.072252082 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, container_name=iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack 
Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true) Oct 14 04:22:01 localhost podman[65301]: 2025-10-14 08:22:01.232438917 +0000 UTC m=+0.112047760 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, tcib_managed=true, batch=17.1_20250721.1, container_name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 14 04:22:01 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:22:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:22:13 localhost podman[65321]: 2025-10-14 08:22:13.540765224 +0000 UTC m=+0.080535122 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, description=Red Hat OpenStack Platform 
17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, release=1) Oct 14 04:22:13 localhost podman[65321]: 2025-10-14 08:22:13.725123666 +0000 UTC m=+0.264893514 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:22:13 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. 
Oct 14 04:22:30 localhost podman[65477]: 2025-10-14 08:22:30.563958516 +0000 UTC m=+0.097356050 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=2, config_id=tripleo_step3, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=) Oct 14 04:22:30 localhost podman[65477]: 2025-10-14 08:22:30.600389974 +0000 UTC m=+0.133787558 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, architecture=x86_64, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, container_name=collectd, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container) Oct 14 04:22:30 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:22:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:22:31 localhost podman[65496]: 2025-10-14 08:22:31.542045292 +0000 UTC m=+0.077825741 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, architecture=x86_64) Oct 14 04:22:31 localhost podman[65496]: 2025-10-14 08:22:31.581278185 +0000 UTC m=+0.117058614 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, architecture=x86_64) Oct 14 04:22:31 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:22:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:22:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5222 writes, 23K keys, 5222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5222 writes, 566 syncs, 9.23 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 311 writes, 719 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.57 MB, 0.00 MB/s#012Interval WAL: 311 writes, 155 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 04:22:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:22:44 localhost podman[65515]: 2025-10-14 08:22:44.556296298 +0000 UTC m=+0.096368652 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, config_id=tripleo_step1, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:22:44 localhost podman[65515]: 2025-10-14 08:22:44.763224879 +0000 UTC m=+0.303297223 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., 
config_id=tripleo_step1, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container) Oct 14 04:22:44 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:22:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:22:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4291 writes, 19K keys, 4291 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4291 writes, 450 syncs, 9.54 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 345 writes, 733 keys, 345 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s#012Interval WAL: 345 writes, 171 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 04:22:51 localhost python3[65591]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:22:52 localhost python3[65636]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430171.1812363-108133-46599845932635/source _original_basename=tmpboaf72n0 follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:22:53 localhost python3[65698]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:22:53 localhost python3[65741]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430172.9286306-108223-152079806929014/source _original_basename=tmpgxawc7iz follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:22:54 localhost python3[65803]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:22:54 localhost python3[65846]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430173.8989918-108424-222753726488333/source _original_basename=tmp8duwb5lj follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Oct 14 04:22:55 localhost python3[65908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:22:55 localhost python3[65951]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430174.9381318-108489-156647651140034/source _original_basename=tmpb9uwpco3 follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:22:56 localhost python3[65981]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 14 04:22:56 localhost systemd[1]: Reloading. Oct 14 04:22:56 localhost systemd-sysv-generator[66009]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:22:56 localhost systemd-rc-local-generator[66004]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:22:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:22:57 localhost systemd[1]: Reloading. Oct 14 04:22:57 localhost systemd-rc-local-generator[66044]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 04:22:57 localhost systemd-sysv-generator[66049]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:22:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:22:57 localhost python3[66071]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:22:57 localhost systemd[1]: Reloading. Oct 14 04:22:58 localhost systemd-rc-local-generator[66098]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:22:58 localhost systemd-sysv-generator[66102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:22:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:22:58 localhost systemd[1]: Reloading. Oct 14 04:22:58 localhost systemd-rc-local-generator[66137]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:22:58 localhost systemd-sysv-generator[66141]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:22:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 14 04:22:58 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. Oct 14 04:22:58 localhost python3[66163]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:22:58 localhost systemd[1]: Reloading. Oct 14 04:22:59 localhost systemd-sysv-generator[66194]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:22:59 localhost systemd-rc-local-generator[66189]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:22:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 04:22:59 localhost python3[66247]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:23:00 localhost python3[66290]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430179.4620247-108595-256052916918568/source _original_basename=tmp10zt0r6x follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:00 localhost python3[66320]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:23:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:23:00 localhost systemd[1]: Reloading. Oct 14 04:23:00 localhost systemd-rc-local-generator[66354]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:00 localhost systemd-sysv-generator[66358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 04:23:00 localhost podman[66322]: 2025-10-14 08:23:00.901586744 +0000 UTC m=+0.102832286 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible) Oct 14 04:23:00 localhost podman[66322]: 2025-10-14 08:23:00.914673482 +0000 UTC m=+0.115919014 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, name=rhosp17/openstack-collectd, config_id=tripleo_step3, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03) Oct 14 04:23:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:01 localhost systemd[1]: tmp-crun.34tvPa.mount: Deactivated successfully. Oct 14 04:23:01 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:23:01 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. Oct 14 04:23:01 localhost python3[66392]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:23:02 localhost podman[66443]: 2025-10-14 08:23:02.06545051 +0000 UTC m=+0.090109347 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, release=1, container_name=iscsid, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team) Oct 14 04:23:02 localhost podman[66443]: 2025-10-14 08:23:02.10307243 +0000 UTC m=+0.127731267 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, config_id=tripleo_step3, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, distribution-scope=public) Oct 14 04:23:02 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:23:03 localhost ansible-async_wrapper.py[66583]: Invoked with 303870500789 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430182.7265313-108708-7991986987480/AnsiballZ_command.py _ Oct 14 04:23:03 localhost ansible-async_wrapper.py[66586]: Starting module and watcher Oct 14 04:23:03 localhost ansible-async_wrapper.py[66586]: Start watching 66587 (3600) Oct 14 04:23:03 localhost ansible-async_wrapper.py[66587]: Start module (66587) Oct 14 04:23:03 localhost ansible-async_wrapper.py[66583]: Return async_wrapper task started. Oct 14 04:23:03 localhost python3[66607]: ansible-ansible.legacy.async_status Invoked with jid=303870500789.66583 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:23:06 localhost puppet-user[66605]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 14 04:23:06 localhost puppet-user[66605]: (file: /etc/puppet/hiera.yaml) Oct 14 04:23:06 localhost puppet-user[66605]: Warning: Undefined variable '::deploy_config_name'; Oct 14 04:23:06 localhost puppet-user[66605]: (file & line not available) Oct 14 04:23:06 localhost puppet-user[66605]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 14 04:23:06 localhost puppet-user[66605]: (file & line not available) Oct 14 04:23:06 localhost puppet-user[66605]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 14 04:23:07 localhost puppet-user[66605]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:23:07 localhost puppet-user[66605]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:23:07 localhost puppet-user[66605]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:23:07 localhost puppet-user[66605]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:23:07 localhost puppet-user[66605]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:23:07 localhost puppet-user[66605]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:23:07 localhost puppet-user[66605]: with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:23:07 localhost puppet-user[66605]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:23:07 localhost puppet-user[66605]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:23:07 localhost puppet-user[66605]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:23:07 localhost puppet-user[66605]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:23:07 localhost puppet-user[66605]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:23:07 localhost puppet-user[66605]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:23:07 localhost puppet-user[66605]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:23:07 localhost puppet-user[66605]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:23:07 localhost puppet-user[66605]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:23:07 localhost puppet-user[66605]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:23:07 localhost puppet-user[66605]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 14 04:23:07 localhost puppet-user[66605]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.22 seconds Oct 14 04:23:08 localhost ansible-async_wrapper.py[66586]: 66587 still running (3600) Oct 14 04:23:13 localhost ansible-async_wrapper.py[66586]: 66587 still running (3595) Oct 14 04:23:13 localhost python3[66801]: ansible-ansible.legacy.async_status Invoked with jid=303870500789.66583 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:23:15 localhost podman[66808]: 2025-10-14 08:23:15.562812152 +0000 UTC m=+0.097353230 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:07:59, vcs-type=git) Oct 14 04:23:15 localhost podman[66808]: 2025-10-14 08:23:15.743911516 +0000 UTC m=+0.278452574 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, version=17.1.9) Oct 14 04:23:15 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:23:16 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 14 04:23:16 localhost systemd[1]: Starting man-db-cache-update.service... Oct 14 04:23:16 localhost systemd[1]: Reloading. Oct 14 04:23:16 localhost systemd-rc-local-generator[66913]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:16 localhost systemd-sysv-generator[66920]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:16 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 14 04:23:17 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 14 04:23:17 localhost systemd[1]: Finished man-db-cache-update.service. Oct 14 04:23:17 localhost systemd[1]: man-db-cache-update.service: Consumed 1.121s CPU time. Oct 14 04:23:17 localhost systemd[1]: run-rb2d73cd9af544a059d877db851a1f2f2.service: Deactivated successfully. Oct 14 04:23:17 localhost puppet-user[66605]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created Oct 14 04:23:17 localhost puppet-user[66605]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}762de3e4fea65813aacbc65d241e205eb613318b51d7eb360bc28dbfb975b7ca' Oct 14 04:23:17 localhost puppet-user[66605]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd' Oct 14 04:23:17 localhost puppet-user[66605]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea' Oct 14 04:23:17 localhost puppet-user[66605]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to 
'{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97' Oct 14 04:23:17 localhost puppet-user[66605]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events Oct 14 04:23:18 localhost ansible-async_wrapper.py[66586]: 66587 still running (3590) Oct 14 04:23:22 localhost puppet-user[66605]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully Oct 14 04:23:23 localhost systemd[1]: Reloading. Oct 14 04:23:23 localhost systemd-rc-local-generator[67999]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:23 localhost systemd-sysv-generator[68002]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:23 localhost ansible-async_wrapper.py[66586]: 66587 still running (3585) Oct 14 04:23:23 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon.... Oct 14 04:23:23 localhost snmpd[68028]: Can't find directory of RPM packages Oct 14 04:23:23 localhost snmpd[68028]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB Oct 14 04:23:23 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon.. Oct 14 04:23:23 localhost systemd[1]: Reloading. Oct 14 04:23:23 localhost systemd-sysv-generator[68091]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 04:23:23 localhost systemd-rc-local-generator[68088]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:23 localhost systemd[1]: Reloading. Oct 14 04:23:23 localhost systemd-rc-local-generator[68120]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:23 localhost systemd-sysv-generator[68125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:24 localhost puppet-user[66605]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running' Oct 14 04:23:24 localhost puppet-user[66605]: Notice: Applied catalog in 17.03 seconds Oct 14 04:23:24 localhost puppet-user[66605]: Application: Oct 14 04:23:24 localhost puppet-user[66605]: Initial environment: production Oct 14 04:23:24 localhost puppet-user[66605]: Converged environment: production Oct 14 04:23:24 localhost puppet-user[66605]: Run mode: user Oct 14 04:23:24 localhost puppet-user[66605]: Changes: Oct 14 04:23:24 localhost puppet-user[66605]: Total: 8 Oct 14 04:23:24 localhost puppet-user[66605]: Events: Oct 14 04:23:24 localhost puppet-user[66605]: Success: 8 Oct 14 04:23:24 localhost puppet-user[66605]: Total: 8 Oct 14 04:23:24 localhost puppet-user[66605]: Resources: Oct 14 04:23:24 localhost puppet-user[66605]: Restarted: 1 Oct 14 04:23:24 localhost puppet-user[66605]: Changed: 8 Oct 14 04:23:24 localhost puppet-user[66605]: Out of sync: 8 Oct 14 04:23:24 localhost 
puppet-user[66605]: Total: 19 Oct 14 04:23:24 localhost puppet-user[66605]: Time: Oct 14 04:23:24 localhost puppet-user[66605]: Schedule: 0.00 Oct 14 04:23:24 localhost puppet-user[66605]: Augeas: 0.01 Oct 14 04:23:24 localhost puppet-user[66605]: File: 0.07 Oct 14 04:23:24 localhost puppet-user[66605]: Config retrieval: 0.28 Oct 14 04:23:24 localhost puppet-user[66605]: Service: 1.26 Oct 14 04:23:24 localhost puppet-user[66605]: Package: 10.46 Oct 14 04:23:24 localhost puppet-user[66605]: Transaction evaluation: 17.03 Oct 14 04:23:24 localhost puppet-user[66605]: Catalog application: 17.03 Oct 14 04:23:24 localhost puppet-user[66605]: Last run: 1760430204 Oct 14 04:23:24 localhost puppet-user[66605]: Exec: 5.06 Oct 14 04:23:24 localhost puppet-user[66605]: Filebucket: 0.00 Oct 14 04:23:24 localhost puppet-user[66605]: Total: 17.04 Oct 14 04:23:24 localhost puppet-user[66605]: Version: Oct 14 04:23:24 localhost puppet-user[66605]: Config: 1760430186 Oct 14 04:23:24 localhost puppet-user[66605]: Puppet: 7.10.0 Oct 14 04:23:24 localhost ansible-async_wrapper.py[66587]: Module complete (66587) Oct 14 04:23:24 localhost python3[68149]: ansible-ansible.legacy.async_status Invoked with jid=303870500789.66583 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:23:24 localhost python3[68225]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:23:25 localhost python3[68256]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:23:25 localhost podman[68285]: Oct 14 
04:23:25 localhost podman[68285]: 2025-10-14 08:23:25.415454514 +0000 UTC m=+0.060202822 container create 69b07e3afa7472821f5e3f440f75a1ff7ddad66f113e2cef6bc0c08a8fa385a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_keldysh, name=rhceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.33.12, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, vendor=Red Hat, Inc., release=553, io.openshift.tags=rhceph ceph) Oct 14 04:23:25 localhost systemd[1]: Started libpod-conmon-69b07e3afa7472821f5e3f440f75a1ff7ddad66f113e2cef6bc0c08a8fa385a9.scope. Oct 14 04:23:25 localhost systemd[1]: Started libcrun container. 
Oct 14 04:23:25 localhost podman[68285]: 2025-10-14 08:23:25.389757371 +0000 UTC m=+0.034505669 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 04:23:25 localhost podman[68285]: 2025-10-14 08:23:25.499226892 +0000 UTC m=+0.143975210 container init 69b07e3afa7472821f5e3f440f75a1ff7ddad66f113e2cef6bc0c08a8fa385a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_keldysh, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, RELEASE=main, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , version=7, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 14 04:23:25 localhost podman[68285]: 2025-10-14 08:23:25.512842744 +0000 UTC m=+0.157591092 container start 69b07e3afa7472821f5e3f440f75a1ff7ddad66f113e2cef6bc0c08a8fa385a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_keldysh, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, distribution-scope=public)
Oct 14 04:23:25 localhost podman[68285]: 2025-10-14 08:23:25.513224034 +0000 UTC m=+0.157972322 container attach 69b07e3afa7472821f5e3f440f75a1ff7ddad66f113e2cef6bc0c08a8fa385a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_keldysh, io.openshift.tags=rhceph ceph, release=553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=)
Oct 14 04:23:25 localhost friendly_keldysh[68300]: 167 167
Oct 14 04:23:25 localhost systemd[1]: libpod-69b07e3afa7472821f5e3f440f75a1ff7ddad66f113e2cef6bc0c08a8fa385a9.scope: Deactivated successfully.
Oct 14 04:23:25 localhost podman[68285]: 2025-10-14 08:23:25.518396452 +0000 UTC m=+0.163144850 container died 69b07e3afa7472821f5e3f440f75a1ff7ddad66f113e2cef6bc0c08a8fa385a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_keldysh, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, release=553)
Oct 14 04:23:25 localhost podman[68305]: 2025-10-14 08:23:25.620759623 +0000 UTC m=+0.092387928 container remove 69b07e3afa7472821f5e3f440f75a1ff7ddad66f113e2cef6bc0c08a8fa385a9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_keldysh, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 14 04:23:25 localhost systemd[1]: libpod-conmon-69b07e3afa7472821f5e3f440f75a1ff7ddad66f113e2cef6bc0c08a8fa385a9.scope: Deactivated successfully.
Oct 14 04:23:25 localhost podman[68373]:
Oct 14 04:23:25 localhost podman[68373]: 2025-10-14 08:23:25.859368907 +0000 UTC m=+0.085265577 container create b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_franklin, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, vcs-type=git, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553)
Oct 14 04:23:25 localhost systemd[1]: Started libpod-conmon-b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f.scope.
Oct 14 04:23:25 localhost systemd[1]: Started libcrun container.
Oct 14 04:23:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee665316801db0e51d9656748283a76a9aab5ed52b2bcb1bbcfb048682327d1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee665316801db0e51d9656748283a76a9aab5ed52b2bcb1bbcfb048682327d1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee665316801db0e51d9656748283a76a9aab5ed52b2bcb1bbcfb048682327d1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:25 localhost python3[68369]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:23:25 localhost podman[68373]: 2025-10-14 08:23:25.916741333 +0000 UTC m=+0.142638033 container init b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_franklin, architecture=x86_64, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_CLEAN=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 14 04:23:25 localhost podman[68373]: 2025-10-14 08:23:25.927558331 +0000 UTC m=+0.153455041 container start b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_franklin, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, ceph=True, name=rhceph, maintainer=Guillaume Abrioux )
Oct 14 04:23:25 localhost podman[68373]: 2025-10-14 08:23:25.92790454 +0000 UTC m=+0.153801290 container attach b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_franklin, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , name=rhceph, release=553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55)
Oct 14 04:23:25 localhost podman[68373]: 2025-10-14 08:23:25.835596206 +0000 UTC m=+0.061492946 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 04:23:26 localhost python3[68411]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp4sb75sd7 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 14 04:23:26 localhost systemd[1]: var-lib-containers-storage-overlay-d6e6f22978e5d559e8640b4e6f8b55023bde9fe03d0c7148372c3a1427812173-merged.mount: Deactivated successfully.
Oct 14 04:23:26 localhost python3[68703]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:23:26 localhost eloquent_franklin[68389]: [
Oct 14 04:23:26 localhost eloquent_franklin[68389]: {
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "available": false,
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "ceph_device": false,
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "lsm_data": {},
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "lvs": [],
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "path": "/dev/sr0",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "rejected_reasons": [
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "Has a FileSystem",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "Insufficient space (<5GB)"
Oct 14 04:23:26 localhost eloquent_franklin[68389]: ],
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "sys_api": {
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "actuators": null,
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "device_nodes": "sr0",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "human_readable_size": "482.00 KB",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "id_bus": "ata",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "model": "QEMU DVD-ROM",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "nr_requests": "2",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "partitions": {},
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "path": "/dev/sr0",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "removable": "1",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "rev": "2.5+",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "ro": "0",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "rotational": "1",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "sas_address": "",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "sas_device_handle": "",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "scheduler_mode": "mq-deadline",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "sectors": 0,
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "sectorsize": "2048",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "size": 493568.0,
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "support_discard": "0",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "type": "disk",
Oct 14 04:23:26 localhost eloquent_franklin[68389]: "vendor": "QEMU"
Oct 14 04:23:26 localhost eloquent_franklin[68389]: }
Oct 14 04:23:26 localhost eloquent_franklin[68389]: }
Oct 14 04:23:26 localhost eloquent_franklin[68389]: ]
Oct 14 04:23:26 localhost systemd[1]: libpod-b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f.scope: Deactivated successfully.
Oct 14 04:23:26 localhost systemd[1]: libpod-b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f.scope: Consumed 1.035s CPU time.
Oct 14 04:23:26 localhost podman[68373]: 2025-10-14 08:23:26.936625681 +0000 UTC m=+1.162522361 container died b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_franklin, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, release=553, ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc.)
Oct 14 04:23:26 localhost systemd[1]: tmp-crun.F24fhU.mount: Deactivated successfully.
Oct 14 04:23:27 localhost systemd[1]: var-lib-containers-storage-overlay-ee665316801db0e51d9656748283a76a9aab5ed52b2bcb1bbcfb048682327d1a-merged.mount: Deactivated successfully.
Oct 14 04:23:27 localhost podman[70279]: 2025-10-14 08:23:27.018381084 +0000 UTC m=+0.069128169 container remove b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_franklin, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_CLEAN=True, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public)
Oct 14 04:23:27 localhost systemd[1]: libpod-conmon-b385a60d382a629a4e08165c697b6fa0b94e934c5fe3247491070fd33f9a0f5f.scope: Deactivated successfully.
Oct 14 04:23:27 localhost python3[70393]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 14 04:23:28 localhost ansible-async_wrapper.py[66586]: Done in kid B.
Oct 14 04:23:28 localhost python3[70412]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:23:29 localhost python3[70444]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:23:30 localhost python3[70494]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:23:30 localhost python3[70512]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:23:31 localhost python3[70574]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:23:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.
Oct 14 04:23:31 localhost systemd[1]: tmp-crun.hZHo6v.mount: Deactivated successfully.
Oct 14 04:23:31 localhost podman[70593]: 2025-10-14 08:23:31.393623108 +0000 UTC m=+0.088867424 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., container_name=collectd, release=2, build-date=2025-07-21T13:04:03, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team)
Oct 14 04:23:31 localhost podman[70593]: 2025-10-14 08:23:31.408159895 +0000 UTC m=+0.103404201 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, release=2, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Oct 14 04:23:31 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully.
Oct 14 04:23:31 localhost python3[70592]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:23:32 localhost python3[70675]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.
Oct 14 04:23:32 localhost podman[70694]: 2025-10-14 08:23:32.347234304 +0000 UTC m=+0.080743588 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Oct 14 04:23:32 localhost podman[70694]: 2025-10-14 08:23:32.38317808 +0000 UTC m=+0.116687324 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, container_name=iscsid)
Oct 14 04:23:32 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully.
Oct 14 04:23:32 localhost python3[70693]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:23:33 localhost python3[70774]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:23:33 localhost python3[70792]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:23:33 localhost python3[70822]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:23:33 localhost systemd[1]: Reloading.
Oct 14 04:23:33 localhost systemd-rc-local-generator[70847]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:33 localhost systemd-sysv-generator[70852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:23:34 localhost python3[70908]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:23:34 localhost python3[70926]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:23:35 localhost python3[70988]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:23:35 localhost python3[71006]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:23:36 localhost python3[71036]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:23:36 localhost systemd[1]: Reloading.
Oct 14 04:23:36 localhost systemd-rc-local-generator[71060]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:36 localhost systemd-sysv-generator[71063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 04:23:36 localhost systemd[1]: Starting Create netns directory...
Oct 14 04:23:36 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 04:23:36 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 04:23:36 localhost systemd[1]: Finished Create netns directory.
Oct 14 04:23:37 localhost python3[71094]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 14 04:23:37 localhost sshd[71095]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:23:39 localhost python3[71155]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Oct 14 04:23:39 localhost podman[71286]: 2025-10-14 08:23:39.359984537 +0000 UTC m=+0.090830917 container create 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true) Oct 14 04:23:39 localhost podman[71318]: 2025-10-14 08:23:39.391902585 +0000 UTC m=+0.076576337 container create 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.openshift.expose-services=, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:23:39 localhost systemd[1]: Started libpod-conmon-1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.scope. 
Oct 14 04:23:39 localhost podman[71286]: 2025-10-14 08:23:39.30597335 +0000 UTC m=+0.036819750 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 14 04:23:39 localhost podman[71302]: 2025-10-14 08:23:39.412167544 +0000 UTC m=+0.127809539 container create b0d1522f1121914c9b89eeaf589dae420729d3fbc271b8369b803f40e845d754 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, build-date=2025-07-21T13:28:44, container_name=configure_cms_options, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, release=1, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12) Oct 14 04:23:39 localhost podman[71302]: 2025-10-14 08:23:39.317198769 +0000 UTC m=+0.032840784 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 14 04:23:39 localhost podman[71330]: 2025-10-14 08:23:39.418952145 +0000 UTC m=+0.104854310 container create f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9) Oct 14 04:23:39 localhost systemd[1]: Started libcrun container. Oct 14 04:23:39 localhost systemd[1]: Started libpod-conmon-1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.scope. Oct 14 04:23:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc06e989b61b0623172ed8f6228aeadb5ab4e2033fa5c722e42cb9029cc166b7/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:39 localhost systemd[1]: Started libcrun container. 
Oct 14 04:23:39 localhost podman[71318]: 2025-10-14 08:23:39.347272078 +0000 UTC m=+0.031945840 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Oct 14 04:23:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7568b0c1b8802be3535f9c50fed9171f7f66ae1eaebd8b147d74d0e23471f5e/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:39 localhost systemd[1]: Started libpod-conmon-f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.scope. Oct 14 04:23:39 localhost podman[71330]: 2025-10-14 08:23:39.356983416 +0000 UTC m=+0.042885581 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Oct 14 04:23:39 localhost podman[71358]: 2025-10-14 08:23:39.468970954 +0000 UTC m=+0.082900555 container create 0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, release=2, build-date=2025-07-21T14:56:59, container_name=nova_libvirt_init_secret, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': 
False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc.) Oct 14 04:23:39 localhost systemd[1]: Started libcrun container. Oct 14 04:23:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141f8240b493de051d128d8af481e4eecafe4083c7fc86019e21768efb6df1ea/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. 
Oct 14 04:23:39 localhost podman[71286]: 2025-10-14 08:23:39.493130127 +0000 UTC m=+0.223976507 container init 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, release=1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=logrotate_crond, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:23:39 localhost systemd[1]: Started libpod-conmon-0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139.scope. Oct 14 04:23:39 localhost systemd[1]: Started libcrun container. Oct 14 04:23:39 localhost podman[71358]: 2025-10-14 08:23:39.429159525 +0000 UTC m=+0.043089156 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 14 04:23:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28793152a1a08ef6d85a0f8369b6de4304acf0fcefe34329896abb9348d5919/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28793152a1a08ef6d85a0f8369b6de4304acf0fcefe34329896abb9348d5919/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28793152a1a08ef6d85a0f8369b6de4304acf0fcefe34329896abb9348d5919/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. 
Oct 14 04:23:39 localhost podman[71286]: 2025-10-14 08:23:39.532022011 +0000 UTC m=+0.262868391 container start 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, tcib_managed=true, build-date=2025-07-21T13:07:52, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, 
name=rhosp17/openstack-cron, release=1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64) Oct 14 04:23:39 localhost python3[71155]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro 
--volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 14 04:23:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:23:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:23:39 localhost podman[71358]: 2025-10-14 08:23:39.542612863 +0000 UTC m=+0.156542464 container init 0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, config_id=tripleo_step4, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack 
osp-17.1, version=17.1.9, vendor=Red Hat, Inc., release=2, build-date=2025-07-21T14:56:59, container_name=nova_libvirt_init_secret, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 14 04:23:39 localhost podman[71318]: 2025-10-14 08:23:39.543831914 +0000 UTC m=+0.228505676 container init 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, container_name=ceilometer_agent_compute) Oct 14 04:23:39 localhost podman[71330]: 2025-10-14 08:23:39.544394259 +0000 UTC m=+0.230296424 container init f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, io.openshift.expose-services=) Oct 14 04:23:39 localhost podman[71358]: 2025-10-14 08:23:39.555533815 +0000 UTC m=+0.169463416 container start 0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, tcib_managed=true, container_name=nova_libvirt_init_secret, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, build-date=2025-07-21T14:56:59, version=17.1.9, release=2, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 
'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., vcs-type=git) Oct 14 04:23:39 localhost podman[71358]: 2025-10-14 08:23:39.556177123 +0000 UTC m=+0.170106744 container attach 0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, io.openshift.expose-services=, container_name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-07-21T14:56:59, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 14 04:23:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:23:39 localhost systemd[1]: Started libpod-conmon-b0d1522f1121914c9b89eeaf589dae420729d3fbc271b8369b803f40e845d754.scope. Oct 14 04:23:39 localhost podman[71318]: 2025-10-14 08:23:39.579218876 +0000 UTC m=+0.263892628 container start 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1) Oct 14 04:23:39 localhost python3[71155]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=6fab081f94b3dd479fa1fef3dbed1d07 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Oct 14 04:23:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:23:39 localhost systemd[1]: Started libcrun container. 
Oct 14 04:23:39 localhost podman[71330]: 2025-10-14 08:23:39.659736677 +0000 UTC m=+0.345638842 container start f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi) Oct 14 04:23:39 localhost python3[71155]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=6fab081f94b3dd479fa1fef3dbed1d07 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume 
/etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Oct 14 04:23:39 localhost podman[71423]: 2025-10-14 08:23:39.719845935 +0000 UTC m=+0.145380537 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, release=1, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 14 04:23:39 localhost systemd[1]: libpod-0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139.scope: Deactivated successfully. Oct 14 04:23:39 localhost podman[71302]: 2025-10-14 08:23:39.831188226 +0000 UTC m=+0.546830241 container init b0d1522f1121914c9b89eeaf589dae420729d3fbc271b8369b803f40e845d754 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . 
external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, config_id=tripleo_step4, container_name=configure_cms_options, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44) Oct 14 04:23:39 localhost podman[71302]: 2025-10-14 08:23:39.842332092 +0000 UTC m=+0.557974087 container start b0d1522f1121914c9b89eeaf589dae420729d3fbc271b8369b803f40e845d754 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=configure_cms_options, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=configure_cms_options, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:23:39 localhost podman[71302]: 2025-10-14 08:23:39.845539767 +0000 UTC m=+0.561181832 container attach b0d1522f1121914c9b89eeaf589dae420729d3fbc271b8369b803f40e845d754 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, version=17.1.9, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=configure_cms_options, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public) Oct 14 04:23:39 localhost podman[71447]: 2025-10-14 08:23:39.872913415 +0000 UTC m=+0.265919092 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public) Oct 14 04:23:39 localhost podman[71401]: 2025-10-14 08:23:39.85167179 +0000 UTC m=+0.306585563 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 14 04:23:39 localhost podman[71447]: 2025-10-14 08:23:39.910969736 +0000 UTC 
m=+0.303975403 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 14 04:23:39 localhost podman[71447]: unhealthy Oct 14 04:23:39 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:23:39 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Failed with result 'exit-code'. Oct 14 04:23:39 localhost podman[71401]: 2025-10-14 08:23:39.937171114 +0000 UTC m=+0.392084917 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=logrotate_crond, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, version=17.1.9) Oct 14 04:23:39 localhost podman[71358]: 2025-10-14 08:23:39.943357038 +0000 UTC m=+0.557286669 container died 0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, release=2, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step4) Oct 14 04:23:39 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:23:40 localhost ovs-vsctl[71564]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . 
external_ids ovn-cms-options Oct 14 04:23:40 localhost podman[71423]: 2025-10-14 08:23:40.053093176 +0000 UTC m=+0.478627778 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, container_name=ceilometer_agent_compute) Oct 14 04:23:40 localhost podman[71423]: unhealthy Oct 14 04:23:40 localhost systemd[1]: libpod-b0d1522f1121914c9b89eeaf589dae420729d3fbc271b8369b803f40e845d754.scope: Deactivated successfully. Oct 14 04:23:40 localhost podman[71302]: 2025-10-14 08:23:40.064620902 +0000 UTC m=+0.780262907 container died b0d1522f1121914c9b89eeaf589dae420729d3fbc271b8369b803f40e845d754 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, distribution-scope=public, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64) Oct 14 04:23:40 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:23:40 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Failed with result 'exit-code'. 
Oct 14 04:23:40 localhost podman[71581]: 2025-10-14 08:23:40.145335258 +0000 UTC m=+0.069455978 container cleanup b0d1522f1121914c9b89eeaf589dae420729d3fbc271b8369b803f40e845d754 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=configure_cms_options, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, 
io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1) Oct 14 04:23:40 localhost systemd[1]: libpod-conmon-b0d1522f1121914c9b89eeaf589dae420729d3fbc271b8369b803f40e845d754.scope: Deactivated successfully. Oct 14 04:23:40 localhost python3[71155]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1760428406 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi Oct 14 04:23:40 localhost podman[71513]: 2025-10-14 08:23:40.179018384 +0000 UTC m=+0.429403018 container cleanup 0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_libvirt_init_secret, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 14 04:23:40 localhost systemd[1]: libpod-conmon-0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139.scope: Deactivated successfully. Oct 14 04:23:40 localhost python3[71155]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=f5be0e0347f8a081fe8927c6f95950cc --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Oct 14 04:23:40 localhost podman[71658]: 2025-10-14 08:23:40.374577914 +0000 UTC m=+0.136761878 container create 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, managed_by=tripleo_ansible, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:23:40 localhost podman[71658]: 2025-10-14 08:23:40.331143668 +0000 UTC m=+0.093327652 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 14 04:23:40 localhost 
podman[71691]: 2025-10-14 08:23:40.429248697 +0000 UTC m=+0.125089546 container create d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, vcs-type=git, release=1, architecture=x86_64, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=setup_ovs_manager, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) Oct 14 04:23:40 localhost systemd[1]: Started libpod-conmon-5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.scope. Oct 14 04:23:40 localhost systemd[1]: Started libcrun container. Oct 14 04:23:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4021d20142192293b753d5aa3904830cf887c958e51a03d916a4726fdc448e46/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:40 localhost podman[71691]: 2025-10-14 08:23:40.357343526 +0000 UTC m=+0.053184375 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 14 04:23:40 localhost systemd[1]: Started libpod-conmon-d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f.scope. Oct 14 04:23:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:23:40 localhost podman[71658]: 2025-10-14 08:23:40.493519466 +0000 UTC m=+0.255703430 container init 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container) Oct 14 04:23:40 localhost systemd[1]: Started libcrun container. Oct 14 04:23:40 localhost podman[71691]: 2025-10-14 08:23:40.512341347 +0000 UTC m=+0.208182156 container init d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, container_name=setup_ovs_manager, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, architecture=x86_64, release=1, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 
'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:23:40 localhost podman[71691]: 2025-10-14 08:23:40.52188702 +0000 UTC m=+0.217727829 container start d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, container_name=setup_ovs_manager, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64) Oct 14 04:23:40 localhost podman[71691]: 2025-10-14 08:23:40.522085175 +0000 UTC m=+0.217926014 container attach d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=setup_ovs_manager, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, distribution-scope=public, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Oct 14 04:23:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:23:40 localhost podman[71658]: 2025-10-14 08:23:40.531432454 +0000 UTC m=+0.293616418 container start 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible) Oct 14 04:23:40 localhost python3[71155]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f5be0e0347f8a081fe8927c6f95950cc --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 14 04:23:40 localhost podman[71740]: 2025-10-14 08:23:40.625859664 +0000 UTC m=+0.083701225 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true) Oct 14 04:23:40 localhost podman[71740]: 2025-10-14 08:23:40.966458091 +0000 UTC m=+0.424299692 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
name=rhosp17/openstack-nova-compute, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:23:40 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. 
Oct 14 04:23:41 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Oct 14 04:23:43 localhost ovs-vsctl[71912]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Oct 14 04:23:43 localhost systemd[1]: libpod-d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f.scope: Deactivated successfully. Oct 14 04:23:43 localhost systemd[1]: libpod-d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f.scope: Consumed 3.023s CPU time. Oct 14 04:23:43 localhost podman[71691]: 2025-10-14 08:23:43.54455447 +0000 UTC m=+3.240395379 container died d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=setup_ovs_manager, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:23:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f-userdata-shm.mount: Deactivated successfully. Oct 14 04:23:43 localhost systemd[1]: var-lib-containers-storage-overlay-8fb3dc6bf81a95cfcd70e4022b330b89375474ef10a51fbbe80fad5539619909-merged.mount: Deactivated successfully. 
Oct 14 04:23:43 localhost podman[71913]: 2025-10-14 08:23:43.65286756 +0000 UTC m=+0.096185778 container cleanup d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, release=1, vcs-type=git, 
io.openshift.expose-services=, container_name=setup_ovs_manager, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 14 04:23:43 localhost systemd[1]: libpod-conmon-d34e4943ef22fe56d80aa1782825abb169226c14bd5c026165d4e4656d88942f.scope: Deactivated successfully. Oct 14 04:23:43 localhost python3[71155]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1760428406 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760428406'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Oct 14 04:23:44 localhost podman[72025]: 2025-10-14 08:23:44.182691798 +0000 UTC m=+0.113726985 container create 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2025-07-21T13:28:44, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:23:44 localhost podman[72026]: 2025-10-14 08:23:44.19970239 +0000 UTC m=+0.126275969 container create 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T16:28:53, tcib_managed=true, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1) Oct 14 04:23:44 localhost podman[72026]: 2025-10-14 08:23:44.119254751 +0000 UTC m=+0.045828400 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 14 04:23:44 
localhost podman[72025]: 2025-10-14 08:23:44.120285969 +0000 UTC m=+0.051321246 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 14 04:23:44 localhost systemd[1]: Started libpod-conmon-403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.scope. Oct 14 04:23:44 localhost systemd[1]: Started libpod-conmon-9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.scope. Oct 14 04:23:44 localhost systemd[1]: Started libcrun container. Oct 14 04:23:44 localhost systemd[1]: Started libcrun container. Oct 14 04:23:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef2659ef36954d83ebad031f4d14eeae08e60b1f17aa34c32cb449aad821b207/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a786747d6feeb3f247951c727a866692741e8c0e2a628920395caa23adc45e/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a786747d6feeb3f247951c727a866692741e8c0e2a628920395caa23adc45e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef2659ef36954d83ebad031f4d14eeae08e60b1f17aa34c32cb449aad821b207/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef2659ef36954d83ebad031f4d14eeae08e60b1f17aa34c32cb449aad821b207/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a786747d6feeb3f247951c727a866692741e8c0e2a628920395caa23adc45e/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 04:23:44 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:23:44 localhost podman[72026]: 2025-10-14 08:23:44.301781985 +0000 UTC m=+0.228355584 container init 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent) Oct 14 04:23:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:23:44 localhost podman[72026]: 2025-10-14 08:23:44.355551504 +0000 UTC m=+0.282125083 container start 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 14 04:23:44 localhost python3[71155]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b594b6ed5677fe328472ea80ffe520cb --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 14 04:23:44 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:23:44 localhost podman[72025]: 2025-10-14 08:23:44.375835353 +0000 UTC m=+0.306870630 container init 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:23:44 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:23:44 localhost podman[72025]: 2025-10-14 08:23:44.42160086 +0000 UTC m=+0.352636087 container start 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, build-date=2025-07-21T13:28:44, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, tcib_managed=true, 
managed_by=tripleo_ansible) Oct 14 04:23:44 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 14 04:23:44 localhost python3[71155]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 14 04:23:44 localhost systemd[1]: Created slice User Slice of UID 0. Oct 14 04:23:44 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 14 04:23:44 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 14 04:23:44 localhost systemd[1]: Starting User Manager for UID 0... 
Oct 14 04:23:44 localhost podman[72065]: 2025-10-14 08:23:44.617819677 +0000 UTC m=+0.249169756 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible) Oct 14 04:23:44 localhost podman[72089]: 2025-10-14 08:23:44.539780223 +0000 UTC m=+0.115500123 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, version=17.1.9, container_name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1) Oct 14 04:23:44 localhost systemd[72111]: Queued start job for default target Main User Target. Oct 14 04:23:44 localhost systemd[72111]: Created slice User Application Slice. Oct 14 04:23:44 localhost systemd[72111]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 14 04:23:44 localhost systemd[72111]: Started Daily Cleanup of User's Temporary Directories. Oct 14 04:23:44 localhost systemd[72111]: Reached target Paths. Oct 14 04:23:44 localhost systemd[72111]: Reached target Timers. Oct 14 04:23:44 localhost systemd[72111]: Starting D-Bus User Message Bus Socket... Oct 14 04:23:44 localhost systemd[72111]: Starting Create User's Volatile Files and Directories... Oct 14 04:23:44 localhost systemd[72111]: Listening on D-Bus User Message Bus Socket. Oct 14 04:23:44 localhost systemd[72111]: Reached target Sockets. Oct 14 04:23:44 localhost systemd[72111]: Finished Create User's Volatile Files and Directories. 
Oct 14 04:23:44 localhost podman[72089]: 2025-10-14 08:23:44.669288736 +0000 UTC m=+0.245008646 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, distribution-scope=public, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1) Oct 14 04:23:44 localhost systemd[72111]: Reached target Basic System. 
Oct 14 04:23:44 localhost systemd[72111]: Reached target Main User Target. Oct 14 04:23:44 localhost systemd[72111]: Startup finished in 149ms. Oct 14 04:23:44 localhost systemd[1]: Started User Manager for UID 0. Oct 14 04:23:44 localhost podman[72089]: unhealthy Oct 14 04:23:44 localhost systemd[1]: Started Session c9 of User root. Oct 14 04:23:44 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:23:44 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:23:44 localhost podman[72065]: 2025-10-14 08:23:44.692430201 +0000 UTC m=+0.323780310 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, distribution-scope=public, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 14 04:23:44 localhost podman[72065]: unhealthy Oct 14 04:23:44 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:23:44 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 04:23:44 localhost systemd[1]: session-c9.scope: Deactivated successfully. 
Oct 14 04:23:44 localhost kernel: device br-int entered promiscuous mode Oct 14 04:23:44 localhost NetworkManager[5972]: [1760430224.7984] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Oct 14 04:23:44 localhost systemd-udevd[72182]: Network interface NamePolicy= disabled on kernel command line. Oct 14 04:23:44 localhost systemd-udevd[72186]: Network interface NamePolicy= disabled on kernel command line. Oct 14 04:23:44 localhost kernel: device genev_sys_6081 entered promiscuous mode Oct 14 04:23:44 localhost NetworkManager[5972]: [1760430224.8315] device (genev_sys_6081): carrier: link connected Oct 14 04:23:44 localhost NetworkManager[5972]: [1760430224.8320] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12) Oct 14 04:23:45 localhost python3[72205]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:45 localhost python3[72221]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:45 localhost python3[72237]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:23:45 localhost systemd[1]: tmp-crun.mElEta.mount: Deactivated successfully. Oct 14 04:23:45 localhost podman[72254]: 2025-10-14 08:23:45.969283332 +0000 UTC m=+0.109781320 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, config_id=tripleo_step1, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=metrics_qdr, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:23:46 localhost python3[72253]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:46 localhost podman[72254]: 2025-10-14 08:23:46.16215423 +0000 UTC m=+0.302652218 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, config_id=tripleo_step1, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1) Oct 14 04:23:46 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:23:46 localhost python3[72299]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:46 localhost python3[72317]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:46 localhost python3[72333]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:23:47 localhost python3[72351]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:23:47 localhost python3[72369]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:23:47 localhost python3[72385]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:23:47 localhost python3[72401]: ansible-stat Invoked with 
path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:23:48 localhost python3[72417]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:23:48 localhost python3[72478]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430228.188432-110077-197094475449630/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:49 localhost python3[72507]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430228.188432-110077-197094475449630/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:49 localhost python3[72536]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430228.188432-110077-197094475449630/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:50 localhost 
python3[72565]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430228.188432-110077-197094475449630/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:50 localhost python3[72594]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430228.188432-110077-197094475449630/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:51 localhost python3[72623]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430228.188432-110077-197094475449630/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:23:51 localhost python3[72639]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 04:23:51 localhost systemd[1]: Reloading. Oct 14 04:23:51 localhost systemd-sysv-generator[72668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:51 localhost systemd-rc-local-generator[72661]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:52 localhost python3[72690]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:23:52 localhost systemd[1]: Reloading. Oct 14 04:23:52 localhost systemd-sysv-generator[72722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:52 localhost systemd-rc-local-generator[72719]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:53 localhost systemd[1]: Starting ceilometer_agent_compute container... Oct 14 04:23:53 localhost tripleo-start-podman-container[72731]: Creating additional drop-in dependency for "ceilometer_agent_compute" (1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8) Oct 14 04:23:53 localhost systemd[1]: Reloading. Oct 14 04:23:53 localhost systemd-rc-local-generator[72784]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:53 localhost systemd-sysv-generator[72787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:53 localhost systemd[1]: Started ceilometer_agent_compute container. Oct 14 04:23:54 localhost python3[72814]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:23:54 localhost systemd[1]: Reloading. Oct 14 04:23:54 localhost systemd-rc-local-generator[72842]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:54 localhost systemd-sysv-generator[72846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:54 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Oct 14 04:23:54 localhost systemd[1]: Started ceilometer_agent_ipmi container. Oct 14 04:23:54 localhost systemd[1]: Stopping User Manager for UID 0... Oct 14 04:23:54 localhost systemd[72111]: Activating special unit Exit the Session... Oct 14 04:23:54 localhost systemd[72111]: Stopped target Main User Target. Oct 14 04:23:54 localhost systemd[72111]: Stopped target Basic System. Oct 14 04:23:54 localhost systemd[72111]: Stopped target Paths. Oct 14 04:23:54 localhost systemd[72111]: Stopped target Sockets. Oct 14 04:23:54 localhost systemd[72111]: Stopped target Timers. 
Oct 14 04:23:54 localhost systemd[72111]: Stopped Daily Cleanup of User's Temporary Directories. Oct 14 04:23:54 localhost systemd[72111]: Closed D-Bus User Message Bus Socket. Oct 14 04:23:54 localhost systemd[72111]: Stopped Create User's Volatile Files and Directories. Oct 14 04:23:54 localhost systemd[72111]: Removed slice User Application Slice. Oct 14 04:23:54 localhost systemd[72111]: Reached target Shutdown. Oct 14 04:23:54 localhost systemd[72111]: Finished Exit the Session. Oct 14 04:23:54 localhost systemd[72111]: Reached target Exit the Session. Oct 14 04:23:54 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 14 04:23:54 localhost systemd[1]: Stopped User Manager for UID 0. Oct 14 04:23:54 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 14 04:23:54 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 14 04:23:54 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 14 04:23:54 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 14 04:23:54 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 14 04:23:55 localhost python3[72882]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:23:55 localhost systemd[1]: Reloading. Oct 14 04:23:55 localhost systemd-sysv-generator[72913]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:55 localhost systemd-rc-local-generator[72910]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 14 04:23:56 localhost systemd[1]: Starting logrotate_crond container... Oct 14 04:23:56 localhost systemd[1]: Started logrotate_crond container. Oct 14 04:23:56 localhost python3[72949]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:23:56 localhost systemd[1]: Reloading. Oct 14 04:23:56 localhost systemd-rc-local-generator[72978]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:56 localhost systemd-sysv-generator[72982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:57 localhost systemd[1]: Starting nova_migration_target container... Oct 14 04:23:57 localhost systemd[1]: Started nova_migration_target container. Oct 14 04:23:57 localhost python3[73017]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:23:57 localhost systemd[1]: Reloading. Oct 14 04:23:58 localhost systemd-sysv-generator[73050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:58 localhost systemd-rc-local-generator[73046]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:58 localhost systemd[1]: Starting ovn_controller container... Oct 14 04:23:58 localhost tripleo-start-podman-container[73057]: Creating additional drop-in dependency for "ovn_controller" (403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17) Oct 14 04:23:58 localhost systemd[1]: Reloading. Oct 14 04:23:58 localhost systemd-rc-local-generator[73116]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:58 localhost systemd-sysv-generator[73119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:23:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:23:58 localhost systemd[1]: Started ovn_controller container. Oct 14 04:23:59 localhost python3[73140]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:23:59 localhost systemd[1]: Reloading. Oct 14 04:23:59 localhost systemd-rc-local-generator[73162]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:23:59 localhost systemd-sysv-generator[73166]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 04:23:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:24:00 localhost systemd[1]: Starting ovn_metadata_agent container... Oct 14 04:24:00 localhost systemd[1]: Started ovn_metadata_agent container. Oct 14 04:24:00 localhost python3[73220]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:24:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:24:01 localhost systemd[1]: tmp-crun.Y0KFlH.mount: Deactivated successfully. 
Oct 14 04:24:01 localhost podman[73277]: 2025-10-14 08:24:01.561294519 +0000 UTC m=+0.103092083 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-collectd, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Oct 14 04:24:01 localhost podman[73277]: 2025-10-14 08:24:01.574970442 +0000 UTC m=+0.116768016 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, release=2, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 14 04:24:01 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:24:02 localhost python3[73362]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005486731 step=4 update_config_hash_only=False Oct 14 04:24:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:24:02 localhost podman[73363]: 2025-10-14 08:24:02.528690131 +0000 UTC m=+0.072197671 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp 
openstack osp-17.1, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:24:02 localhost podman[73363]: 2025-10-14 08:24:02.567170944 +0000 UTC m=+0.110678464 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.9, config_id=tripleo_step3, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team) Oct 14 04:24:02 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:24:02 localhost python3[73394]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:24:03 localhost python3[73414]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 14 04:24:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:24:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:24:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:24:10 localhost podman[73418]: 2025-10-14 08:24:10.551215953 +0000 UTC m=+0.087894989 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, release=1, distribution-scope=public, 
build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9) Oct 14 04:24:10 localhost podman[73418]: 2025-10-14 08:24:10.597447191 +0000 UTC m=+0.134126187 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 14 04:24:10 localhost systemd[1]: tmp-crun.yNsExt.mount: Deactivated successfully. Oct 14 04:24:10 localhost podman[73417]: 2025-10-14 08:24:10.612945624 +0000 UTC m=+0.151955661 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, vcs-type=git, distribution-scope=public, architecture=x86_64, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 04:24:10 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:24:10 localhost podman[73419]: 2025-10-14 08:24:10.711262048 +0000 UTC m=+0.244674736 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4) Oct 14 04:24:10 localhost podman[73417]: 2025-10-14 08:24:10.728056834 +0000 UTC m=+0.267066881 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public) Oct 14 04:24:10 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:24:10 localhost podman[73419]: 2025-10-14 08:24:10.755153605 +0000 UTC m=+0.288566323 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64) Oct 14 04:24:10 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:24:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:24:11 localhost podman[73489]: 2025-10-14 08:24:11.548593152 +0000 UTC m=+0.086892561 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:24:11 localhost podman[73489]: 2025-10-14 08:24:11.918316552 +0000 UTC m=+0.456616001 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 14 04:24:11 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:24:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:24:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:24:15 localhost systemd[1]: tmp-crun.0hyJqb.mount: Deactivated successfully. 
Oct 14 04:24:15 localhost podman[73512]: 2025-10-14 08:24:15.550044747 +0000 UTC m=+0.092222173 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1, version=17.1.9, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=) Oct 14 04:24:15 localhost systemd[1]: tmp-crun.jW4pAG.mount: Deactivated 
successfully. Oct 14 04:24:15 localhost podman[73513]: 2025-10-14 08:24:15.603826897 +0000 UTC m=+0.141113613 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 14 04:24:15 localhost podman[73512]: 2025-10-14 08:24:15.631324038 +0000 UTC m=+0.173501474 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, tcib_managed=true, batch=17.1_20250721.1) Oct 14 04:24:15 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:24:15 localhost podman[73513]: 2025-10-14 08:24:15.749524621 +0000 UTC m=+0.286811277 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, 
maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 14 04:24:15 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:24:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:24:16 localhost podman[73559]: 2025-10-14 08:24:16.540191485 +0000 UTC m=+0.084197459 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc.) Oct 14 04:24:16 localhost podman[73559]: 2025-10-14 08:24:16.726121818 +0000 UTC m=+0.270127802 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr) Oct 14 04:24:16 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:24:23 localhost snmpd[68028]: empty variable list in _query Oct 14 04:24:23 localhost snmpd[68028]: empty variable list in _query Oct 14 04:24:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. 
Oct 14 04:24:32 localhost podman[73664]: 2025-10-14 08:24:32.559268466 +0000 UTC m=+0.090578648 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, release=2, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:24:32 localhost podman[73664]: 2025-10-14 08:24:32.571047419 +0000 UTC m=+0.102357661 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=2, io.buildah.version=1.33.12, config_id=tripleo_step3, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Oct 14 04:24:32 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:24:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:24:32 localhost systemd[1]: tmp-crun.pASP0y.mount: Deactivated successfully. 
Oct 14 04:24:32 localhost podman[73682]: 2025-10-14 08:24:32.699110555 +0000 UTC m=+0.081334904 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, 
batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-iscsid) Oct 14 04:24:32 localhost podman[73682]: 2025-10-14 08:24:32.738262136 +0000 UTC m=+0.120486405 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, build-date=2025-07-21T13:27:15, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:24:32 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:24:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:24:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:24:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:24:41 localhost podman[73702]: 2025-10-14 08:24:41.561810855 +0000 UTC m=+0.099693462 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, container_name=ceilometer_agent_compute) Oct 14 04:24:41 localhost podman[73703]: 2025-10-14 08:24:41.607008447 +0000 UTC m=+0.140974781 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond) Oct 14 04:24:41 localhost podman[73703]: 2025-10-14 08:24:41.614775531 +0000 UTC m=+0.148741865 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 14 04:24:41 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:24:41 localhost podman[73704]: 2025-10-14 08:24:41.661962837 +0000 UTC m=+0.191766060 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T15:29:47, architecture=x86_64, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc.) Oct 14 04:24:41 localhost podman[73702]: 2025-10-14 08:24:41.666690242 +0000 UTC m=+0.204572859 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git) Oct 14 04:24:41 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:24:41 localhost podman[73704]: 2025-10-14 08:24:41.692853841 +0000 UTC m=+0.222657024 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, version=17.1.9) Oct 14 04:24:41 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:24:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:24:42 localhost podman[73773]: 2025-10-14 08:24:42.536042907 +0000 UTC m=+0.075519173 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:24:42 localhost podman[73773]: 2025-10-14 08:24:42.917250634 +0000 UTC m=+0.456726840 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:24:42 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:24:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:24:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:24:46 localhost podman[73795]: 2025-10-14 08:24:46.544883202 +0000 UTC m=+0.081687046 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_id=tripleo_step4, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp17/openstack-ovn-controller, vcs-type=git) Oct 14 04:24:46 localhost podman[73795]: 2025-10-14 08:24:46.601214778 +0000 
UTC m=+0.138018602 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:24:46 localhost podman[73796]: 2025-10-14 08:24:46.615443164 +0000 UTC m=+0.148434787 container health_status 
9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team) Oct 14 04:24:46 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:24:46 localhost podman[73796]: 2025-10-14 08:24:46.690414591 +0000 UTC m=+0.223406144 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Oct 14 04:24:46 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:24:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:24:47 localhost podman[73841]: 2025-10-14 08:24:47.544903565 +0000 UTC m=+0.086526664 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64) Oct 14 04:24:47 localhost podman[73841]: 2025-10-14 08:24:47.746153845 +0000 UTC m=+0.287776954 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1) Oct 14 04:24:47 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:25:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:25:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:25:03 localhost systemd[1]: tmp-crun.DyELhY.mount: Deactivated successfully. 
Oct 14 04:25:03 localhost podman[73872]: 2025-10-14 08:25:03.540394962 +0000 UTC m=+0.079299493 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, release=2, config_id=tripleo_step3, distribution-scope=public, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, maintainer=OpenStack TripleO Team) Oct 14 04:25:03 localhost podman[73872]: 2025-10-14 08:25:03.550848218 +0000 UTC m=+0.089752739 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, container_name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., version=17.1.9) Oct 14 04:25:03 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:25:03 localhost podman[73873]: 2025-10-14 08:25:03.602991583 +0000 UTC m=+0.134950371 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, container_name=iscsid, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack 
osp-17.1, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 14 04:25:03 localhost podman[73873]: 2025-10-14 08:25:03.614151468 +0000 UTC m=+0.146110286 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, build-date=2025-07-21T13:27:15, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3) Oct 14 04:25:03 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:25:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:25:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:25:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:25:12 localhost systemd[1]: tmp-crun.O5AxRe.mount: Deactivated successfully. 
Oct 14 04:25:12 localhost podman[73914]: 2025-10-14 08:25:12.540688977 +0000 UTC m=+0.079495428 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, container_name=logrotate_crond, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:25:12 localhost podman[73914]: 2025-10-14 08:25:12.547339982 +0000 UTC m=+0.086146483 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
tcib_managed=true, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Oct 14 04:25:12 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:25:12 localhost podman[73915]: 2025-10-14 08:25:12.597598308 +0000 UTC m=+0.128507831 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc.) 
Oct 14 04:25:12 localhost podman[73913]: 2025-10-14 08:25:12.647267039 +0000 UTC m=+0.185217477 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, release=1) Oct 14 04:25:12 localhost podman[73915]: 2025-10-14 08:25:12.672388152 +0000 UTC m=+0.203297715 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:25:12 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:25:12 localhost podman[73913]: 2025-10-14 08:25:12.726565041 +0000 UTC m=+0.264515469 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 04:25:12 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:25:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:25:13 localhost podman[73986]: 2025-10-14 08:25:13.581325883 +0000 UTC m=+0.084685415 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_migration_target, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container) Oct 14 04:25:13 localhost podman[73986]: 2025-10-14 08:25:13.936925025 +0000 UTC m=+0.440284607 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=nova_migration_target, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute) Oct 14 04:25:13 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:25:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:25:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:25:17 localhost systemd[1]: tmp-crun.18KyoS.mount: Deactivated successfully. 
Oct 14 04:25:17 localhost podman[74010]: 2025-10-14 08:25:17.542004937 +0000 UTC m=+0.077289000 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:25:17 localhost systemd[1]: tmp-crun.EQpasw.mount: Deactivated successfully. Oct 14 04:25:17 localhost podman[74009]: 2025-10-14 08:25:17.601768154 +0000 UTC m=+0.139609604 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:25:17 localhost podman[74010]: 2025-10-14 08:25:17.615156987 +0000 UTC m=+0.150441040 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:25:17 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:25:17 localhost podman[74009]: 2025-10-14 08:25:17.657499704 +0000 UTC m=+0.195341164 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, release=1, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 14 04:25:17 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:25:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:25:18 localhost podman[74056]: 2025-10-14 08:25:18.601688115 +0000 UTC m=+0.148180961 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc.) Oct 14 04:25:18 localhost podman[74056]: 2025-10-14 08:25:18.794586533 +0000 UTC m=+0.341079389 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:07:59, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-qdrouterd-container) Oct 14 04:25:18 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:25:30 localhost podman[74186]: 2025-10-14 08:25:30.152495287 +0000 UTC m=+0.102632539 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_BRANCH=main, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 14 04:25:30 localhost podman[74186]: 2025-10-14 08:25:30.247415182 +0000 UTC m=+0.197552414 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, version=7, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 04:25:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:25:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:25:34 localhost podman[74330]: 2025-10-14 08:25:34.567027805 +0000 UTC m=+0.100540603 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20250721.1, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, container_name=iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 14 04:25:34 localhost podman[74330]: 2025-10-14 08:25:34.610487181 +0000 UTC m=+0.143999939 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, version=17.1.9, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:25:34 localhost podman[74329]: 2025-10-14 08:25:34.613904121 +0000 UTC m=+0.144898263 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, release=2, batch=17.1_20250721.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc.) 
Oct 14 04:25:34 localhost podman[74329]: 2025-10-14 08:25:34.627161961 +0000 UTC m=+0.158156173 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:25:34 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:25:34 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:25:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:25:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:25:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:25:43 localhost podman[74368]: 2025-10-14 08:25:43.559611924 +0000 UTC m=+0.093939379 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 04:25:43 localhost podman[74369]: 2025-10-14 08:25:43.606367117 +0000 UTC m=+0.137996031 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, release=1, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-cron) Oct 14 04:25:43 localhost podman[74368]: 2025-10-14 08:25:43.669028171 +0000 UTC m=+0.203355586 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 04:25:43 localhost podman[74370]: 2025-10-14 08:25:43.665880348 +0000 UTC m=+0.193679592 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, 
distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:25:43 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:25:43 localhost podman[74369]: 2025-10-14 08:25:43.694745809 +0000 UTC m=+0.226374683 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, 
vendor=Red Hat, Inc., name=rhosp17/openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond) Oct 14 04:25:43 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:25:43 localhost podman[74370]: 2025-10-14 08:25:43.786323896 +0000 UTC m=+0.314123160 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, vcs-type=git)
Oct 14 04:25:43 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully.
Oct 14 04:25:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.
Oct 14 04:25:44 localhost podman[74439]: 2025-10-14 08:25:44.546766738 +0000 UTC m=+0.086570155 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37) Oct 14 04:25:44 localhost podman[74439]: 2025-10-14 08:25:44.923761355 +0000 UTC m=+0.463564742 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1, container_name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, tcib_managed=true)
Oct 14 04:25:44 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully.
Oct 14 04:25:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.
Oct 14 04:25:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.
Oct 14 04:25:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 14 04:25:48 localhost recover_tripleo_nova_virtqemud[74470]: 62532
Oct 14 04:25:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 14 04:25:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 14 04:25:48 localhost podman[74463]: 2025-10-14 08:25:48.547806036 +0000 UTC m=+0.085879986 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.33.12, release=1, tcib_managed=true, version=17.1.9, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:25:48 localhost podman[74463]: 2025-10-14 08:25:48.594224031 +0000 UTC m=+0.132297991 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 14 04:25:48 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully.
Oct 14 04:25:48 localhost podman[74462]: 2025-10-14 08:25:48.602008087 +0000 UTC m=+0.144246648 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, container_name=ovn_controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:25:48 localhost podman[74462]: 2025-10-14 08:25:48.688615512 +0000 
UTC m=+0.230854073 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.9, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 14 04:25:48 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully.
Oct 14 04:25:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:25:49 localhost podman[74512]: 2025-10-14 08:25:49.545088817 +0000 UTC m=+0.081460900 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 14 04:25:49 localhost podman[74512]: 2025-10-14 08:25:49.734597117 +0000 UTC m=+0.270969180 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1)
Oct 14 04:25:49 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully.
Oct 14 04:26:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.
Oct 14 04:26:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.
Oct 14 04:26:05 localhost systemd[1]: tmp-crun.ukdO8Y.mount: Deactivated successfully.
Oct 14 04:26:05 localhost podman[74540]: 2025-10-14 08:26:05.560722815 +0000 UTC m=+0.098119620 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T13:04:03, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:26:05 localhost podman[74541]: 2025-10-14 08:26:05.600139805 +0000 UTC m=+0.137613652 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=iscsid, io.openshift.expose-services=, version=17.1.9) Oct 14 04:26:05 localhost podman[74541]: 2025-10-14 08:26:05.612004358 +0000 UTC m=+0.149478235 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, container_name=iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 14 04:26:05 localhost podman[74540]: 2025-10-14 08:26:05.624766395 +0000 UTC m=+0.162163170 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:26:05 localhost systemd[1]: 
df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:26:05 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:26:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:26:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:26:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:26:14 localhost systemd[1]: tmp-crun.MTeyQV.mount: Deactivated successfully. Oct 14 04:26:14 localhost podman[74580]: 2025-10-14 08:26:14.566531714 +0000 UTC m=+0.099687981 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, release=1, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 14 04:26:14 localhost podman[74579]: 2025-10-14 08:26:14.610288299 +0000 UTC m=+0.146877837 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, release=1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 14 04:26:14 localhost podman[74581]: 2025-10-14 08:26:14.669156422 +0000 UTC m=+0.200048010 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:26:14 localhost podman[74580]: 2025-10-14 08:26:14.681814645 +0000 UTC m=+0.214970882 container exec_died 
1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron) Oct 14 04:26:14 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:26:14 localhost podman[74581]: 2025-10-14 08:26:14.700221981 +0000 UTC m=+0.231113569 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, architecture=x86_64, release=1, tcib_managed=true) Oct 14 04:26:14 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:26:14 localhost podman[74579]: 2025-10-14 08:26:14.72255468 +0000 UTC m=+0.259144178 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:26:14 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:26:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:26:15 localhost podman[74651]: 2025-10-14 08:26:15.537687976 +0000 UTC m=+0.077396403 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, vcs-type=git, 
version=17.1.9, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1) Oct 14 04:26:15 localhost podman[74651]: 2025-10-14 08:26:15.935097941 +0000 UTC m=+0.474806328 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, container_name=nova_migration_target, 
name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:26:15 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:26:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:26:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:26:19 localhost systemd[1]: tmp-crun.gXmPq6.mount: Deactivated successfully. 
Oct 14 04:26:19 localhost podman[74673]: 2025-10-14 08:26:19.546688866 +0000 UTC m=+0.086502873 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=) Oct 14 04:26:19 localhost podman[74672]: 2025-10-14 08:26:19.565919123 +0000 UTC m=+0.105483474 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:26:19 localhost podman[74672]: 2025-10-14 08:26:19.612518782 +0000 UTC m=+0.152083093 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, architecture=x86_64, 
container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible) Oct 14 04:26:19 localhost podman[74673]: 2025-10-14 08:26:19.613649463 +0000 UTC m=+0.153463480 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, architecture=x86_64, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1) Oct 14 04:26:19 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:26:19 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:26:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:26:20 localhost podman[74719]: 2025-10-14 08:26:20.537522918 +0000 UTC m=+0.081958724 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=metrics_qdr, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:26:20 localhost podman[74719]: 2025-10-14 08:26:20.754058271 +0000 UTC m=+0.298494067 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:26:20 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:26:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:26:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:26:36 localhost podman[74827]: 2025-10-14 08:26:36.557800035 +0000 UTC m=+0.088360461 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, container_name=iscsid, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, 
com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:26:36 localhost podman[74827]: 2025-10-14 08:26:36.57046781 +0000 UTC m=+0.101028266 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, tcib_managed=true, name=rhosp17/openstack-iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, version=17.1.9, container_name=iscsid) Oct 14 04:26:36 localhost podman[74826]: 2025-10-14 08:26:36.631258073 +0000 UTC m=+0.163841323 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, batch=17.1_20250721.1, config_id=tripleo_step3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 14 04:26:36 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:26:36 localhost podman[74826]: 2025-10-14 08:26:36.639968424 +0000 UTC m=+0.172551624 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, release=2, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, container_name=collectd, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, 
batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 14 04:26:36 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:26:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:26:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:26:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:26:45 localhost podman[74865]: 2025-10-14 08:26:45.57556433 +0000 UTC m=+0.111085121 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, container_name=ceilometer_agent_compute, release=1, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T14:45:33) Oct 14 04:26:45 localhost podman[74867]: 2025-10-14 08:26:45.666838018 +0000 UTC m=+0.197675176 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, distribution-scope=public, 
version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:26:45 localhost podman[74865]: 2025-10-14 08:26:45.684154675 +0000 UTC m=+0.219675486 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T14:45:33, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 14 04:26:45 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:26:45 localhost podman[74867]: 2025-10-14 08:26:45.700907247 +0000 UTC m=+0.231744425 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 14 04:26:45 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:26:45 localhost podman[74866]: 2025-10-14 08:26:45.775914306 +0000 UTC m=+0.307417432 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, container_name=logrotate_crond, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, release=1, distribution-scope=public, tcib_managed=true) Oct 14 04:26:45 localhost podman[74866]: 2025-10-14 08:26:45.811126044 +0000 UTC m=+0.342629150 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 04:26:45 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:26:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:26:46 localhost podman[74934]: 2025-10-14 08:26:46.540326663 +0000 UTC m=+0.080555586 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_migration_target, tcib_managed=true, name=rhosp17/openstack-nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 14 04:26:46 localhost podman[74934]: 2025-10-14 08:26:46.90940132 +0000 UTC m=+0.449630233 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, container_name=nova_migration_target, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, release=1, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 14 04:26:46 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:26:49 localhost python3[75003]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:26:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:26:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:26:49 localhost podman[75048]: 2025-10-14 08:26:49.847087265 +0000 UTC m=+0.084098100 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44) Oct 14 04:26:49 localhost podman[75049]: 2025-10-14 08:26:49.912309095 +0000 
UTC m=+0.144661087 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 14 04:26:49 localhost python3[75050]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430409.115373-114287-202540230119685/source _original_basename=tmpld5jtrps follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:26:49 localhost podman[75048]: 2025-10-14 08:26:49.929204261 +0000 UTC m=+0.166215126 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12) Oct 14 04:26:49 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:26:49 localhost podman[75049]: 2025-10-14 08:26:49.988218108 +0000 UTC m=+0.220570080 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, version=17.1.9, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:26:50 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:26:50 localhost python3[75126]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:26:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:26:51 localhost systemd[1]: tmp-crun.7Re3lL.mount: Deactivated successfully. 
Oct 14 04:26:51 localhost podman[75177]: 2025-10-14 08:26:51.519192569 +0000 UTC m=+0.109052638 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:26:51 localhost podman[75177]: 2025-10-14 08:26:51.752022532 +0000 UTC m=+0.341882541 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:26:51 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:26:52 localhost ansible-async_wrapper.py[75328]: Invoked with 730184921548 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430412.0352204-114414-133663569711228/AnsiballZ_command.py _ Oct 14 04:26:52 localhost ansible-async_wrapper.py[75331]: Starting module and watcher Oct 14 04:26:52 localhost ansible-async_wrapper.py[75331]: Start watching 75332 (3600) Oct 14 04:26:52 localhost ansible-async_wrapper.py[75332]: Start module (75332) Oct 14 04:26:52 localhost ansible-async_wrapper.py[75328]: Return async_wrapper task started. Oct 14 04:26:52 localhost python3[75352]: ansible-ansible.legacy.async_status Invoked with jid=730184921548.75328 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:26:56 localhost puppet-user[75336]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 14 04:26:56 localhost puppet-user[75336]: (file: /etc/puppet/hiera.yaml) Oct 14 04:26:56 localhost puppet-user[75336]: Warning: Undefined variable '::deploy_config_name'; Oct 14 04:26:56 localhost puppet-user[75336]: (file & line not available) Oct 14 04:26:56 localhost puppet-user[75336]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 14 04:26:56 localhost puppet-user[75336]: (file & line not available) Oct 14 04:26:56 localhost puppet-user[75336]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 14 04:26:56 localhost puppet-user[75336]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:26:56 localhost puppet-user[75336]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:26:56 localhost puppet-user[75336]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:26:56 localhost puppet-user[75336]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:26:56 localhost puppet-user[75336]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:26:56 localhost puppet-user[75336]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:26:56 localhost puppet-user[75336]: with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:26:56 localhost puppet-user[75336]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:26:56 localhost puppet-user[75336]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:26:56 localhost puppet-user[75336]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:26:56 localhost puppet-user[75336]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:26:56 localhost puppet-user[75336]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:26:56 localhost puppet-user[75336]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:26:56 localhost puppet-user[75336]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:26:56 localhost puppet-user[75336]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 14 04:26:56 localhost puppet-user[75336]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 14 04:26:56 localhost puppet-user[75336]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 14 04:26:56 localhost puppet-user[75336]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 14 04:26:56 localhost puppet-user[75336]: Notice: Compiled catalog for np0005486731.localdomain in environment production in 0.24 seconds Oct 14 04:26:57 localhost puppet-user[75336]: Notice: Applied catalog in 0.39 seconds Oct 14 04:26:57 localhost puppet-user[75336]: Application: Oct 14 04:26:57 localhost puppet-user[75336]: Initial environment: production Oct 14 04:26:57 localhost puppet-user[75336]: Converged environment: production Oct 14 04:26:57 localhost puppet-user[75336]: Run mode: user Oct 14 04:26:57 localhost puppet-user[75336]: Changes: Oct 14 04:26:57 localhost puppet-user[75336]: Events: Oct 14 04:26:57 localhost puppet-user[75336]: Resources: Oct 14 04:26:57 localhost puppet-user[75336]: Total: 19 Oct 14 04:26:57 localhost puppet-user[75336]: Time: Oct 14 04:26:57 localhost puppet-user[75336]: Filebucket: 0.00 Oct 14 04:26:57 localhost puppet-user[75336]: Package: 0.00 Oct 14 04:26:57 localhost puppet-user[75336]: Schedule: 0.00 Oct 14 04:26:57 localhost puppet-user[75336]: Exec: 0.01 Oct 14 04:26:57 localhost puppet-user[75336]: Augeas: 0.01 Oct 14 04:26:57 localhost puppet-user[75336]: File: 0.02 Oct 14 04:26:57 localhost puppet-user[75336]: Service: 0.07 Oct 14 04:26:57 localhost puppet-user[75336]: Config retrieval: 0.31 Oct 14 04:26:57 localhost puppet-user[75336]: Transaction evaluation: 0.33 Oct 14 04:26:57 localhost puppet-user[75336]: Catalog application: 0.39 Oct 14 04:26:57 localhost puppet-user[75336]: Last run: 1760430417 Oct 14 04:26:57 localhost puppet-user[75336]: Total: 0.39 Oct 14 04:26:57 localhost puppet-user[75336]: Version: Oct 14 04:26:57 localhost puppet-user[75336]: Config: 1760430416 Oct 14 04:26:57 localhost puppet-user[75336]: Puppet: 7.10.0 Oct 14 04:26:57 localhost ansible-async_wrapper.py[75332]: Module complete (75332) Oct 14 04:26:57 localhost ansible-async_wrapper.py[75331]: Done in kid B. 
Oct 14 04:27:03 localhost python3[75490]: ansible-ansible.legacy.async_status Invoked with jid=730184921548.75328 mode=status _async_dir=/tmp/.ansible_async Oct 14 04:27:04 localhost python3[75506]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:27:04 localhost python3[75522]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:27:04 localhost python3[75572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:27:05 localhost python3[75590]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpqvptpwr4 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 14 04:27:05 localhost python3[75620]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Oct 14 04:27:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:27:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:27:07 localhost podman[75710]: 2025-10-14 08:27:07.612632682 +0000 UTC m=+0.145136880 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 14 04:27:07 localhost podman[75710]: 2025-10-14 08:27:07.624157436 +0000 UTC m=+0.156661634 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, release=2, vendor=Red Hat, Inc.) 
Oct 14 04:27:07 localhost podman[75711]: 2025-10-14 08:27:07.573195802 +0000 UTC m=+0.105799583 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1, config_id=tripleo_step3, version=17.1.9, container_name=iscsid, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Oct 14 04:27:07 localhost podman[75711]: 2025-10-14 08:27:07.654004264 +0000 UTC m=+0.186608085 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, version=17.1.9, container_name=iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, release=1, build-date=2025-07-21T13:27:15, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20250721.1) Oct 14 04:27:07 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:27:07 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:27:07 localhost python3[75764]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Oct 14 04:27:08 localhost python3[75783]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:27:09 localhost python3[75815]: 
ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:27:10 localhost python3[75865]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:27:10 localhost python3[75883]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:27:10 localhost python3[75945]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:27:11 localhost python3[75963]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:27:11 localhost python3[76025]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:27:12 localhost python3[76043]: 
ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:27:12 localhost python3[76105]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:27:12 localhost python3[76123]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:27:13 localhost python3[76153]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:27:13 localhost systemd[1]: Reloading. Oct 14 04:27:13 localhost systemd-sysv-generator[76185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 04:27:13 localhost systemd-rc-local-generator[76181]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:27:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:27:13 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:27:13 localhost recover_tripleo_nova_virtqemud[76193]: 62532 Oct 14 04:27:13 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:27:13 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:27:14 localhost python3[76242]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:27:14 localhost python3[76260]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:27:15 localhost python3[76322]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 14 04:27:15 localhost python3[76340]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file 
path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:27:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:27:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:27:15 localhost podman[76370]: 2025-10-14 08:27:15.839801008 +0000 UTC m=+0.084690436 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, release=1, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 14 04:27:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. 
Oct 14 04:27:15 localhost podman[76372]: 2025-10-14 08:27:15.881026225 +0000 UTC m=+0.125460200 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team) Oct 14 04:27:15 localhost podman[76370]: 2025-10-14 08:27:15.886659354 +0000 UTC m=+0.131548772 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, release=1) Oct 14 04:27:15 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:27:15 localhost podman[76372]: 2025-10-14 08:27:15.93842339 +0000 UTC m=+0.182857435 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.9, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., 
architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git) Oct 14 04:27:15 localhost podman[76401]: 2025-10-14 08:27:15.968918464 +0000 UTC m=+0.115275372 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, container_name=logrotate_crond, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Oct 14 04:27:15 localhost podman[76401]: 2025-10-14 08:27:15.980286664 +0000 UTC m=+0.126643642 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Oct 14 04:27:15 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:27:16 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:27:16 localhost python3[76371]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:27:16 localhost systemd[1]: Reloading. Oct 14 04:27:16 localhost systemd-sysv-generator[76471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:27:16 localhost systemd-rc-local-generator[76464]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:27:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 04:27:16 localhost systemd[1]: Starting Create netns directory... Oct 14 04:27:16 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 14 04:27:16 localhost systemd[1]: Finished Create netns directory. Oct 14 04:27:17 localhost python3[76500]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 14 04:27:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:27:17 localhost systemd[1]: tmp-crun.3iS7iM.mount: Deactivated successfully. Oct 14 04:27:17 localhost podman[76516]: 2025-10-14 08:27:17.550182712 +0000 UTC m=+0.090359685 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4) Oct 14 04:27:17 localhost podman[76516]: 2025-10-14 08:27:17.968142449 +0000 UTC m=+0.508319402 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1) Oct 14 04:27:17 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. 
Oct 14 04:27:18 localhost python3[76580]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Oct 14 04:27:19 localhost podman[76619]: 2025-10-14 08:27:19.244307959 +0000 UTC m=+0.091113456 container create a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., container_name=nova_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 14 04:27:19 localhost systemd[1]: Started libpod-conmon-a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.scope. Oct 14 04:27:19 localhost podman[76619]: 2025-10-14 08:27:19.198285554 +0000 UTC m=+0.045091091 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 14 04:27:19 localhost systemd[1]: Started libcrun container. 
Oct 14 04:27:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bc9ba3f84039f07e25a57d0e85a4cd956846d0f86f31738331270568331766/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:27:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bc9ba3f84039f07e25a57d0e85a4cd956846d0f86f31738331270568331766/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:27:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bc9ba3f84039f07e25a57d0e85a4cd956846d0f86f31738331270568331766/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 04:27:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bc9ba3f84039f07e25a57d0e85a4cd956846d0f86f31738331270568331766/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 04:27:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bc9ba3f84039f07e25a57d0e85a4cd956846d0f86f31738331270568331766/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 14 04:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:27:19 localhost podman[76619]: 2025-10-14 08:27:19.344562774 +0000 UTC m=+0.191368261 container init a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true) Oct 14 04:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:27:19 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Oct 14 04:27:19 localhost podman[76619]: 2025-10-14 08:27:19.394295635 +0000 UTC m=+0.241101142 container start a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step5, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, container_name=nova_compute) Oct 14 04:27:19 localhost python3[76580]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova 
--volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 14 04:27:19 localhost systemd[1]: Created slice User Slice of UID 0. Oct 14 04:27:19 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 14 04:27:19 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 14 04:27:19 localhost systemd[1]: Starting User Manager for UID 0... 
Oct 14 04:27:19 localhost podman[76640]: 2025-10-14 08:27:19.491775807 +0000 UTC m=+0.096065795 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, build-date=2025-07-21T14:48:37, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:27:19 localhost systemd[76658]: Queued start job for default target Main User Target. Oct 14 04:27:19 localhost systemd[76658]: Created slice User Application Slice. Oct 14 04:27:19 localhost systemd[76658]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 14 04:27:19 localhost systemd[76658]: Started Daily Cleanup of User's Temporary Directories. Oct 14 04:27:19 localhost systemd[76658]: Reached target Paths. Oct 14 04:27:19 localhost systemd[76658]: Reached target Timers. Oct 14 04:27:19 localhost systemd[76658]: Starting D-Bus User Message Bus Socket... Oct 14 04:27:19 localhost systemd[76658]: Starting Create User's Volatile Files and Directories... Oct 14 04:27:19 localhost systemd[76658]: Finished Create User's Volatile Files and Directories. Oct 14 04:27:19 localhost systemd[76658]: Listening on D-Bus User Message Bus Socket. 
Oct 14 04:27:19 localhost systemd[76658]: Reached target Sockets. Oct 14 04:27:19 localhost systemd[76658]: Reached target Basic System. Oct 14 04:27:19 localhost systemd[76658]: Reached target Main User Target. Oct 14 04:27:19 localhost systemd[76658]: Startup finished in 116ms. Oct 14 04:27:19 localhost systemd[1]: Started User Manager for UID 0. Oct 14 04:27:19 localhost podman[76640]: 2025-10-14 08:27:19.608139318 +0000 UTC m=+0.212429286 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=) Oct 14 04:27:19 localhost systemd[1]: Started Session c10 of User root. Oct 14 04:27:19 localhost podman[76640]: unhealthy Oct 14 04:27:19 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:27:19 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 04:27:19 localhost systemd[1]: session-c10.scope: Deactivated successfully. 
Oct 14 04:27:19 localhost podman[76739]: 2025-10-14 08:27:19.965161176 +0000 UTC m=+0.094633698 container create dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/container-config-scripts:/container-config-scripts']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, container_name=nova_wait_for_compute_service, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:27:20 localhost systemd[1]: Started libpod-conmon-dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1.scope. Oct 14 04:27:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:27:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:27:20 localhost podman[76739]: 2025-10-14 08:27:19.921218848 +0000 UTC m=+0.050691440 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 14 04:27:20 localhost systemd[1]: Started libcrun container. 
Oct 14 04:27:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f4b0e95523a628062f3012de3b4171920b3b66bb237ad158b0a7cab481dd4f/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Oct 14 04:27:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f4b0e95523a628062f3012de3b4171920b3b66bb237ad158b0a7cab481dd4f/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Oct 14 04:27:20 localhost podman[76739]: 2025-10-14 08:27:20.048243769 +0000 UTC m=+0.177716301 container init dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 
nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1, container_name=nova_wait_for_compute_service, version=17.1.9, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5) Oct 14 04:27:20 localhost podman[76739]: 2025-10-14 08:27:20.060018799 +0000 UTC m=+0.189491341 container start dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, config_id=tripleo_step5, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'detach': False, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:27:20 localhost podman[76739]: 2025-10-14 08:27:20.060396199 +0000 UTC m=+0.189868721 container attach dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, container_name=nova_wait_for_compute_service, config_id=tripleo_step5, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, batch=17.1_20250721.1, distribution-scope=public) Oct 14 04:27:20 localhost podman[76756]: 2025-10-14 08:27:20.115451942 +0000 UTC m=+0.087537630 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) Oct 14 04:27:20 localhost podman[76754]: 2025-10-14 08:27:20.164854776 +0000 UTC m=+0.140982502 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:27:20 localhost podman[76754]: 2025-10-14 08:27:20.184204025 +0000 UTC m=+0.160331671 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller) Oct 14 04:27:20 localhost podman[76756]: 2025-10-14 08:27:20.19118964 +0000 UTC m=+0.163275318 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 14 04:27:20 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully.
Oct 14 04:27:20 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully.
Oct 14 04:27:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.
Oct 14 04:27:22 localhost podman[76809]: 2025-10-14 08:27:22.547017183 +0000 UTC m=+0.085384554 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, distribution-scope=public) Oct 14 04:27:22 localhost podman[76809]: 2025-10-14 08:27:22.742776368 +0000 UTC m=+0.281143679 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr)
Oct 14 04:27:22 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully.
Oct 14 04:27:29 localhost systemd[1]: Stopping User Manager for UID 0...
Oct 14 04:27:29 localhost systemd[76658]: Activating special unit Exit the Session...
Oct 14 04:27:29 localhost systemd[76658]: Stopped target Main User Target.
Oct 14 04:27:29 localhost systemd[76658]: Stopped target Basic System.
Oct 14 04:27:29 localhost systemd[76658]: Stopped target Paths.
Oct 14 04:27:29 localhost systemd[76658]: Stopped target Sockets.
Oct 14 04:27:29 localhost systemd[76658]: Stopped target Timers.
Oct 14 04:27:29 localhost systemd[76658]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 14 04:27:29 localhost systemd[76658]: Closed D-Bus User Message Bus Socket.
Oct 14 04:27:29 localhost systemd[76658]: Stopped Create User's Volatile Files and Directories.
Oct 14 04:27:29 localhost systemd[76658]: Removed slice User Application Slice.
Oct 14 04:27:29 localhost systemd[76658]: Reached target Shutdown.
Oct 14 04:27:29 localhost systemd[76658]: Finished Exit the Session.
Oct 14 04:27:29 localhost systemd[76658]: Reached target Exit the Session.
Oct 14 04:27:29 localhost systemd[1]: user@0.service: Deactivated successfully.
Oct 14 04:27:29 localhost systemd[1]: Stopped User Manager for UID 0.
Oct 14 04:27:29 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 14 04:27:29 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 14 04:27:29 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 14 04:27:29 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 14 04:27:29 localhost systemd[1]: Removed slice User Slice of UID 0.
Oct 14 04:27:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.
Oct 14 04:27:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.
Oct 14 04:27:38 localhost podman[76916]: 2025-10-14 08:27:38.556968252 +0000 UTC m=+0.093749805 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, release=2, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, name=rhosp17/openstack-collectd,
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 14 04:27:38 localhost podman[76916]: 2025-10-14 08:27:38.59630292 +0000 UTC m=+0.133084433 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, 
container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team) Oct 14 04:27:38 localhost podman[76917]: 2025-10-14 08:27:38.605431721 +0000 UTC m=+0.139376219 container health_status 
df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, distribution-scope=public, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 14 04:27:38 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully.
Oct 14 04:27:38 localhost podman[76917]: 2025-10-14 08:27:38.622586833 +0000 UTC m=+0.156531331 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.openshift.expose-services=, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro',
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git)
Oct 14 04:27:38 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully.
Oct 14 04:27:45 localhost systemd[1]: session-27.scope: Deactivated successfully.
Oct 14 04:27:45 localhost systemd[1]: session-27.scope: Consumed 3.014s CPU time.
Oct 14 04:27:45 localhost systemd-logind[760]: Session 27 logged out. Waiting for processes to exit.
Oct 14 04:27:45 localhost systemd-logind[760]: Removed session 27.
Oct 14 04:27:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.
Oct 14 04:27:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.
Oct 14 04:27:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.
Oct 14 04:27:46 localhost podman[76954]: 2025-10-14 08:27:46.550104624 +0000 UTC m=+0.090980621 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, version=17.1.9, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12) Oct 14 04:27:46 localhost podman[76955]: 2025-10-14 08:27:46.605113366 +0000 UTC m=+0.142787569 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, container_name=logrotate_crond, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true) Oct 14 04:27:46 localhost podman[76955]: 2025-10-14 08:27:46.615770427 +0000 UTC m=+0.153444630 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1, architecture=x86_64, batch=17.1_20250721.1, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9) Oct 14 04:27:46 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:27:46 localhost podman[76954]: 2025-10-14 08:27:46.659451419 +0000 UTC m=+0.200327496 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-ceilometer-compute-container, 
io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:27:46 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:27:46 localhost podman[76956]: 2025-10-14 08:27:46.707885336 +0000 UTC m=+0.243190746 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, version=17.1.9, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 04:27:46 localhost podman[76956]: 2025-10-14 08:27:46.769667597 +0000 UTC m=+0.304973007 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, release=1) Oct 14 04:27:46 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:27:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:27:48 localhost systemd[1]: tmp-crun.P1f82a.mount: Deactivated successfully. 
Oct 14 04:27:48 localhost podman[77027]: 2025-10-14 08:27:48.557499535 +0000 UTC m=+0.096489237 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-type=git) Oct 14 04:27:48 localhost podman[77027]: 2025-10-14 08:27:48.900181406 +0000 UTC m=+0.439171108 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, tcib_managed=true, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:27:48 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:27:50 localhost systemd[1]: tmp-crun.eZjvJ4.mount: Deactivated successfully. 
Oct 14 04:27:50 localhost podman[77049]: 2025-10-14 08:27:50.557249534 +0000 UTC m=+0.083226257 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:27:50 localhost podman[77048]: 2025-10-14 08:27:50.612008138 +0000 UTC m=+0.142688025 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:27:50 localhost podman[77050]: 2025-10-14 08:27:50.664469293 +0000 UTC m=+0.187718054 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 14 04:27:50 localhost podman[77048]: 2025-10-14 08:27:50.667196795 +0000 UTC m=+0.197876642 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 14 04:27:50 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:27:50 localhost podman[77049]: 2025-10-14 08:27:50.69393594 +0000 UTC m=+0.219912593 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 14 04:27:50 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:27:50 localhost podman[77050]: 2025-10-14 08:27:50.717855161 +0000 UTC m=+0.241103942 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, name=rhosp17/openstack-nova-compute, release=1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12) Oct 14 04:27:50 localhost podman[77050]: unhealthy Oct 14 04:27:50 localhost systemd[1]: 
a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:27:50 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 04:27:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:27:53 localhost podman[77118]: 2025-10-14 08:27:53.553008741 +0000 UTC m=+0.090518649 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd) Oct 14 04:27:53 localhost podman[77118]: 2025-10-14 08:27:53.814097999 +0000 UTC m=+0.351607807 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 14 04:27:53 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:28:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:28:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:28:09 localhost podman[77147]: 2025-10-14 08:28:09.605885502 +0000 UTC m=+0.146326201 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, name=rhosp17/openstack-collectd) Oct 14 04:28:09 localhost podman[77147]: 2025-10-14 08:28:09.61564538 +0000 UTC m=+0.156086089 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:04:03, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd) Oct 14 04:28:09 localhost podman[77148]: 2025-10-14 08:28:09.575982824 +0000 UTC m=+0.115056967 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, container_name=iscsid, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12) Oct 14 04:28:09 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:28:09 localhost podman[77148]: 2025-10-14 08:28:09.686643063 +0000 UTC m=+0.225717276 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9) Oct 14 04:28:09 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:28:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:28:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:28:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:28:17 localhost podman[77185]: 2025-10-14 08:28:17.540003337 +0000 UTC m=+0.082838917 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33) Oct 14 04:28:17 localhost podman[77187]: 2025-10-14 08:28:17.589433761 +0000 UTC m=+0.124177147 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=) Oct 14 04:28:17 localhost podman[77186]: 2025-10-14 08:28:17.653489841 +0000 UTC m=+0.191956235 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, 
config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, io.buildah.version=1.33.12, release=1) Oct 14 04:28:17 localhost podman[77186]: 
2025-10-14 08:28:17.665087567 +0000 UTC m=+0.203553971 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp 
openstack osp-17.1, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Oct 14 04:28:17 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:28:17 localhost podman[77187]: 2025-10-14 08:28:17.706024547 +0000 UTC m=+0.240767933 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1) Oct 14 04:28:17 localhost podman[77185]: 2025-10-14 08:28:17.719167213 +0000 UTC m=+0.262002833 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute) Oct 14 04:28:17 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:28:17 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:28:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:28:19 localhost podman[77257]: 2025-10-14 08:28:19.538258686 +0000 UTC m=+0.082674373 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37) Oct 14 04:28:19 localhost podman[77257]: 2025-10-14 08:28:19.911200855 +0000 UTC m=+0.455616532 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, version=17.1.9, container_name=nova_migration_target, release=1, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:28:19 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:28:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:28:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:28:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:28:21 localhost systemd[1]: tmp-crun.gGDyah.mount: Deactivated successfully. Oct 14 04:28:21 localhost systemd[1]: tmp-crun.TUPNAs.mount: Deactivated successfully. 
Oct 14 04:28:21 localhost podman[77282]: 2025-10-14 08:28:21.604358885 +0000 UTC m=+0.132154987 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=nova_compute, vcs-type=git, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:28:21 localhost podman[77281]: 2025-10-14 08:28:21.566358123 +0000 UTC m=+0.097765390 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, release=1, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:28:21 localhost podman[77281]: 2025-10-14 08:28:21.645873401 +0000 UTC m=+0.177280678 container exec_died 
9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:28:21 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:28:21 localhost podman[77282]: 2025-10-14 08:28:21.667144062 +0000 UTC m=+0.194940184 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-nova-compute-container) Oct 14 04:28:21 localhost podman[77282]: unhealthy Oct 14 04:28:21 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:28:21 localhost systemd[1]: 
a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 04:28:21 localhost podman[77280]: 2025-10-14 08:28:21.764601724 +0000 UTC m=+0.297842260 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1, io.buildah.version=1.33.12, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, 
vcs-type=git, batch=17.1_20250721.1) Oct 14 04:28:21 localhost podman[77280]: 2025-10-14 08:28:21.813527064 +0000 UTC m=+0.346767680 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, release=1, architecture=x86_64, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:28:21 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:28:23 localhost sshd[77348]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:28:24 localhost podman[77350]: 2025-10-14 08:28:24.510508768 +0000 UTC m=+0.093426836 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., vcs-type=git, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59) Oct 14 04:28:24 localhost podman[77350]: 2025-10-14 08:28:24.711403488 +0000 UTC m=+0.294321626 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20250721.1) Oct 14 04:28:24 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:28:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:28:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:28:40 localhost systemd[1]: tmp-crun.8IoNmL.mount: Deactivated successfully. 
Oct 14 04:28:40 localhost podman[77458]: 2025-10-14 08:28:40.541818979 +0000 UTC m=+0.082517058 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, name=rhosp17/openstack-collectd, version=17.1.9, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, release=2) Oct 14 04:28:40 localhost podman[77458]: 2025-10-14 08:28:40.555999243 +0000 UTC m=+0.096697282 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:28:40 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:28:40 localhost podman[77459]: 2025-10-14 08:28:40.645003332 +0000 UTC m=+0.179208560 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat 
OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-type=git, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, container_name=iscsid, version=17.1.9, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container) Oct 14 04:28:40 localhost podman[77459]: 2025-10-14 08:28:40.680160059 +0000 UTC m=+0.214365297 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, vcs-type=git, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, vendor=Red Hat, Inc.) Oct 14 04:28:40 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:28:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:28:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:28:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:28:48 localhost systemd[1]: tmp-crun.edtUTG.mount: Deactivated successfully. 
Oct 14 04:28:48 localhost podman[77498]: 2025-10-14 08:28:48.568440964 +0000 UTC m=+0.101743556 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:52) Oct 14 04:28:48 localhost podman[77497]: 2025-10-14 08:28:48.611305045 +0000 UTC m=+0.146579448 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:28:48 localhost podman[77499]: 2025-10-14 08:28:48.670092066 +0000 UTC m=+0.200295936 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, config_id=tripleo_step4) Oct 14 04:28:48 localhost podman[77498]: 2025-10-14 08:28:48.686194991 +0000 UTC m=+0.219497573 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, release=1, tcib_managed=true, build-date=2025-07-21T13:07:52, distribution-scope=public, com.redhat.component=openstack-cron-container) Oct 14 04:28:48 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:28:48 localhost podman[77499]: 2025-10-14 08:28:48.706109866 +0000 UTC m=+0.236313746 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1) Oct 14 04:28:48 localhost podman[77497]: 2025-10-14 08:28:48.722465058 +0000 UTC m=+0.257739461 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, version=17.1.9, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, 
container_name=ceilometer_agent_compute, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:28:48 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:28:48 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:28:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:28:50 localhost systemd[1]: tmp-crun.0baSF0.mount: Deactivated successfully. 
Oct 14 04:28:50 localhost podman[77569]: 2025-10-14 08:28:50.555988821 +0000 UTC m=+0.098274813 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 
nova-compute, io.buildah.version=1.33.12, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, version=17.1.9) Oct 14 04:28:50 localhost podman[77569]: 2025-10-14 08:28:50.906244252 +0000 UTC m=+0.448530234 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Oct 14 04:28:50 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:28:52 localhost podman[77593]: 2025-10-14 08:28:52.550324867 +0000 UTC m=+0.080289789 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible) Oct 14 04:28:52 localhost systemd[1]: tmp-crun.rubUpq.mount: Deactivated successfully. 
Oct 14 04:28:52 localhost podman[77592]: 2025-10-14 08:28:52.597805984 +0000 UTC m=+0.133358123 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, container_name=ovn_metadata_agent) Oct 14 04:28:52 localhost podman[77593]: 2025-10-14 08:28:52.610105351 +0000 UTC m=+0.140070273 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:28:52 localhost podman[77593]: unhealthy Oct 14 04:28:52 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:28:52 localhost 
systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 04:28:52 localhost podman[77591]: 2025-10-14 08:28:52.613094451 +0000 UTC m=+0.150175020 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.33.12, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 04:28:52 localhost podman[77592]: 2025-10-14 08:28:52.667188275 +0000 UTC m=+0.202740384 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 04:28:52 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:28:52 localhost podman[77591]: 2025-10-14 08:28:52.696147228 +0000 UTC m=+0.233227817 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container) Oct 14 04:28:52 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:28:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:28:55 localhost systemd[1]: tmp-crun.rCHrNc.mount: Deactivated successfully. 
Oct 14 04:28:55 localhost podman[77659]: 2025-10-14 08:28:55.555958451 +0000 UTC m=+0.096867006 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, build-date=2025-07-21T13:07:59, tcib_managed=true, version=17.1.9, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:28:55 localhost podman[77659]: 2025-10-14 08:28:55.785185438 +0000 UTC m=+0.326093973 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:28:55 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:29:02 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:29:02 localhost recover_tripleo_nova_virtqemud[77689]: 62532 Oct 14 04:29:02 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:29:02 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:29:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:29:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:29:11 localhost podman[77691]: 2025-10-14 08:29:11.559447574 +0000 UTC m=+0.098698315 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, tcib_managed=true, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 14 04:29:11 localhost podman[77692]: 2025-10-14 08:29:11.605544784 +0000 UTC m=+0.142392191 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, release=1, distribution-scope=public) Oct 14 04:29:11 localhost podman[77691]: 2025-10-14 08:29:11.627755746 +0000 UTC m=+0.167006477 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., architecture=x86_64, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=2) Oct 14 04:29:11 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:29:11 localhost podman[77692]: 2025-10-14 08:29:11.646268821 +0000 UTC m=+0.183116268 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, container_name=iscsid, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:29:11 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:29:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:29:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:29:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:29:19 localhost podman[77734]: 2025-10-14 08:29:19.552296467 +0000 UTC m=+0.081861035 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, architecture=x86_64, version=17.1.9, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:29:19 localhost podman[77733]: 2025-10-14 08:29:19.614705942 +0000 UTC m=+0.147018643 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, release=1, vendor=Red Hat, Inc., container_name=logrotate_crond, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:29:19 localhost podman[77732]: 2025-10-14 08:29:19.661993875 +0000 UTC m=+0.196845864 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33) Oct 
14 04:29:19 localhost podman[77734]: 2025-10-14 08:29:19.6663062 +0000 UTC m=+0.195870758 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, maintainer=OpenStack 
TripleO Team, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi) Oct 14 04:29:19 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:29:19 localhost podman[77733]: 2025-10-14 08:29:19.678883265 +0000 UTC m=+0.211195926 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true) Oct 14 04:29:19 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:29:19 localhost podman[77732]: 2025-10-14 08:29:19.714085485 +0000 UTC m=+0.248937494 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1) Oct 14 04:29:19 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:29:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:29:21 localhost podman[77807]: 2025-10-14 08:29:21.553163339 +0000 UTC m=+0.093800123 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.expose-services=) Oct 14 04:29:21 localhost podman[77807]: 2025-10-14 08:29:21.955372772 +0000 UTC m=+0.496009616 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, release=1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:29:21 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:29:23 localhost systemd[1]: tmp-crun.BGcWr9.mount: Deactivated successfully. 
Oct 14 04:29:23 localhost podman[77831]: 2025-10-14 08:29:23.560483723 +0000 UTC m=+0.099402524 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64) Oct 14 04:29:23 localhost podman[77831]: 2025-10-14 08:29:23.595162688 +0000 
UTC m=+0.134081509 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:29:23 localhost systemd[1]: tmp-crun.Il5TQO.mount: Deactivated successfully. 
Oct 14 04:29:23 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:29:23 localhost podman[77832]: 2025-10-14 08:29:23.620569696 +0000 UTC m=+0.153901728 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, container_name=ovn_metadata_agent, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) Oct 14 04:29:23 localhost podman[77833]: 2025-10-14 08:29:23.659189517 +0000 UTC m=+0.190101484 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, tcib_managed=true) Oct 14 04:29:23 localhost podman[77832]: 2025-10-14 08:29:23.669881323 
+0000 UTC m=+0.203213385 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1) Oct 14 04:29:23 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:29:23 localhost podman[77833]: 2025-10-14 08:29:23.746537598 +0000 UTC m=+0.277449595 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, release=1, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:29:23 localhost podman[77833]: unhealthy Oct 14 04:29:23 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: 
Main process exited, code=exited, status=1/FAILURE Oct 14 04:29:23 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 04:29:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:29:26 localhost podman[77900]: 2025-10-14 08:29:26.54561207 +0000 UTC m=+0.081253020 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:29:26 localhost podman[77900]: 2025-10-14 08:29:26.809200463 +0000 UTC m=+0.344841393 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 14 04:29:26 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:29:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:29:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:29:42 localhost systemd[1]: tmp-crun.vbQllD.mount: Deactivated successfully. 
Oct 14 04:29:42 localhost podman[78007]: 2025-10-14 08:29:42.568570555 +0000 UTC m=+0.099954889 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, 
name=rhosp17/openstack-iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1) Oct 14 04:29:42 localhost podman[78007]: 2025-10-14 08:29:42.580893863 +0000 UTC m=+0.112278237 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat 
OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team) Oct 14 04:29:42 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:29:42 localhost podman[78006]: 2025-10-14 08:29:42.649077243 +0000 UTC m=+0.181656888 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, version=17.1.9, release=2, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:29:42 localhost podman[78006]: 2025-10-14 08:29:42.659932302 +0000 UTC m=+0.192511977 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., 
com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, distribution-scope=public, name=rhosp17/openstack-collectd, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12) Oct 14 04:29:42 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: 
Deactivated successfully. Oct 14 04:29:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:29:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:29:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:29:50 localhost podman[78048]: 2025-10-14 08:29:50.579397709 +0000 UTC m=+0.113701025 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T15:29:47) Oct 14 04:29:50 localhost podman[78046]: 2025-10-14 08:29:50.672193096 +0000 UTC m=+0.209930103 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:29:50 localhost podman[78047]: 2025-10-14 08:29:50.720903905 +0000 UTC m=+0.256357352 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=) Oct 14 04:29:50 localhost podman[78046]: 2025-10-14 08:29:50.738168646 +0000 UTC m=+0.275905683 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, release=1, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, 
vcs-type=git, version=17.1.9) Oct 14 04:29:50 localhost podman[78048]: 2025-10-14 08:29:50.738654549 +0000 UTC m=+0.272957865 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.expose-services=, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible) Oct 14 04:29:50 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:29:50 localhost podman[78047]: 2025-10-14 08:29:50.761273882 +0000 UTC m=+0.296727339 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-cron) Oct 14 04:29:50 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:29:50 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:29:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:29:52 localhost systemd[1]: tmp-crun.rJxBxS.mount: Deactivated successfully. 
Oct 14 04:29:52 localhost podman[78117]: 2025-10-14 08:29:52.546973593 +0000 UTC m=+0.092171931 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 14 04:29:52 localhost podman[78117]: 2025-10-14 08:29:52.948191429 +0000 UTC m=+0.493389727 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 14 04:29:52 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:29:54 localhost podman[78140]: 2025-10-14 08:29:54.541583127 +0000 UTC m=+0.081625098 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.9) Oct 14 04:29:54 localhost podman[78142]: 2025-10-14 08:29:54.562450584 +0000 
UTC m=+0.094194074 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:29:54 localhost podman[78140]: 2025-10-14 08:29:54.570430957 +0000 UTC m=+0.110472958 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, release=1) Oct 14 04:29:54 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:29:54 localhost podman[78141]: 2025-10-14 08:29:54.648016538 +0000 UTC m=+0.181714920 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public) Oct 14 04:29:54 localhost podman[78142]: 2025-10-14 08:29:54.650158435 +0000 UTC m=+0.181901995 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step5, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9) Oct 14 04:29:54 localhost podman[78142]: unhealthy Oct 14 04:29:54 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:29:54 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. 
Oct 14 04:29:54 localhost podman[78141]: 2025-10-14 08:29:54.737638789 +0000 UTC m=+0.271337171 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible) Oct 14 04:29:54 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:29:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:29:57 localhost systemd[1]: tmp-crun.I8EIc8.mount: Deactivated successfully. 
Oct 14 04:29:57 localhost podman[78208]: 2025-10-14 08:29:57.553787026 +0000 UTC m=+0.094690138 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, architecture=x86_64, release=1, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, container_name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public) Oct 14 04:29:57 localhost podman[78208]: 2025-10-14 08:29:57.780369512 +0000 UTC m=+0.321272554 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:29:57 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:30:08 localhost sshd[78238]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:30:08 localhost sshd[78239]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:30:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:30:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:30:13 localhost podman[78241]: 2025-10-14 08:30:13.560783904 +0000 UTC m=+0.093262499 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, 
com.redhat.component=openstack-iscsid-container, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.9, io.openshift.expose-services=) Oct 14 04:30:13 localhost systemd[1]: tmp-crun.1mDvrb.mount: Deactivated successfully. Oct 14 04:30:13 localhost podman[78240]: 2025-10-14 08:30:13.606201406 +0000 UTC m=+0.142122193 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, vcs-type=git, release=2, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd) Oct 14 04:30:13 localhost podman[78240]: 2025-10-14 08:30:13.619174753 +0000 UTC m=+0.155095530 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 14 04:30:13 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:30:13 localhost podman[78241]: 2025-10-14 08:30:13.673149403 +0000 UTC m=+0.205627948 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, architecture=x86_64, container_name=iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true) Oct 14 04:30:13 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:30:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:30:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:30:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:30:21 localhost systemd[1]: tmp-crun.WFalu5.mount: Deactivated successfully. Oct 14 04:30:21 localhost podman[78282]: 2025-10-14 08:30:21.567417456 +0000 UTC m=+0.095049668 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, container_name=logrotate_crond, distribution-scope=public) Oct 14 04:30:21 localhost podman[78281]: 2025-10-14 08:30:21.619099685 +0000 UTC m=+0.150488537 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, 
container_name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 14 04:30:21 localhost podman[78282]: 2025-10-14 08:30:21.637019373 +0000 UTC m=+0.164651585 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, container_name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, name=rhosp17/openstack-cron, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, config_id=tripleo_step4) Oct 14 04:30:21 localhost podman[78281]: 2025-10-14 08:30:21.650249676 +0000 UTC m=+0.181638538 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T14:45:33) Oct 14 04:30:21 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:30:21 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:30:21 localhost podman[78283]: 2025-10-14 08:30:21.712832856 +0000 UTC m=+0.240732775 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:30:21 localhost podman[78283]: 2025-10-14 08:30:21.747216033 +0000 UTC m=+0.275115952 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:30:21 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:30:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:30:23 localhost systemd[1]: tmp-crun.bLyygm.mount: Deactivated successfully. 
Oct 14 04:30:23 localhost podman[78398]: 2025-10-14 08:30:23.532494003 +0000 UTC m=+0.069142046 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37) Oct 14 04:30:23 localhost podman[78398]: 2025-10-14 08:30:23.922712775 +0000 UTC m=+0.459360828 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:30:23 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:30:25 localhost systemd[1]: tmp-crun.TOUwu3.mount: Deactivated successfully. 
Oct 14 04:30:25 localhost podman[78464]: 2025-10-14 08:30:25.559134562 +0000 UTC m=+0.096978019 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:28:44, release=1, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 14 04:30:25 localhost podman[78465]: 2025-10-14 08:30:25.602621443 +0000 
UTC m=+0.140261864 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Oct 14 04:30:25 localhost podman[78464]: 2025-10-14 08:30:25.610177744 +0000 UTC m=+0.148021181 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:30:25 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:30:25 localhost podman[78466]: 2025-10-14 08:30:25.659149851 +0000 UTC m=+0.192808866 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, tcib_managed=true, version=17.1.9, architecture=x86_64, distribution-scope=public, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:30:25 localhost podman[78465]: 2025-10-14 08:30:25.675527338 +0000 UTC m=+0.213167689 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Oct 14 04:30:25 localhost podman[78466]: 2025-10-14 08:30:25.687934319 +0000 UTC m=+0.221593394 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, build-date=2025-07-21T14:48:37, container_name=nova_compute, distribution-scope=public, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:30:25 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:30:25 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:30:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:30:28 localhost podman[78534]: 2025-10-14 08:30:28.553622528 +0000 UTC m=+0.092632923 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Oct 14 04:30:28 localhost podman[78534]: 2025-10-14 08:30:28.775109348 +0000 UTC m=+0.314119743 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, build-date=2025-07-21T13:07:59, distribution-scope=public, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9) Oct 14 04:30:28 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:30:32 localhost systemd[1]: libpod-dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1.scope: Deactivated successfully. Oct 14 04:30:32 localhost podman[78564]: 2025-10-14 08:30:32.867440339 +0000 UTC m=+0.048852225 container died dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=nova_wait_for_compute_service, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack 
Platform 17.1 nova-compute, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}) Oct 14 04:30:32 localhost systemd[1]: tmp-crun.6Q1DsZ.mount: Deactivated successfully. Oct 14 04:30:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1-userdata-shm.mount: Deactivated successfully. Oct 14 04:30:32 localhost systemd[1]: var-lib-containers-storage-overlay-79f4b0e95523a628062f3012de3b4171920b3b66bb237ad158b0a7cab481dd4f-merged.mount: Deactivated successfully. 
Oct 14 04:30:32 localhost podman[78564]: 2025-10-14 08:30:32.91282096 +0000 UTC m=+0.094232776 container cleanup dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}) Oct 14 04:30:32 localhost systemd[1]: libpod-conmon-dcf2a65bbb853a5bb3fa44fff63c6a3fdd869ed21b915c5625371c5359108ac1.scope: Deactivated successfully. Oct 14 04:30:32 localhost python3[76580]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=f5be0e0347f8a081fe8927c6f95950cc --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 14 04:30:33 localhost python3[78617]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:30:33 localhost python3[78633]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 04:30:34 localhost python3[78694]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1760430633.8113544-119109-215500702037914/source dest=/etc/systemd/system/tripleo_nova_compute.service 
mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:30:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:30:34 localhost recover_tripleo_nova_virtqemud[78712]: 62532 Oct 14 04:30:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:30:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:30:34 localhost python3[78710]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 04:30:34 localhost systemd[1]: Reloading. Oct 14 04:30:34 localhost systemd-rc-local-generator[78731]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:30:34 localhost systemd-sysv-generator[78735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:30:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:30:35 localhost python3[78764]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 04:30:36 localhost systemd[1]: Reloading. Oct 14 04:30:36 localhost systemd-sysv-generator[78792]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:30:36 localhost systemd-rc-local-generator[78789]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:30:36 localhost systemd[1]: Starting nova_compute container... Oct 14 04:30:36 localhost tripleo-start-podman-container[78804]: Creating additional drop-in dependency for "nova_compute" (a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e) Oct 14 04:30:36 localhost systemd[1]: Reloading. Oct 14 04:30:36 localhost systemd-sysv-generator[78865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 04:30:36 localhost systemd-rc-local-generator[78862]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 04:30:36 localhost systemd[1]: Started nova_compute container. 
Oct 14 04:30:37 localhost python3[78900]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:30:38 localhost python3[79051]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005486731 step=5 update_config_hash_only=False Oct 14 04:30:39 localhost python3[79100]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 04:30:39 localhost python3[79116]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 14 04:30:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:30:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:30:44 localhost podman[79132]: 2025-10-14 08:30:44.545828471 +0000 UTC m=+0.087090265 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:30:44 localhost systemd[1]: tmp-crun.inP98q.mount: Deactivated successfully. Oct 14 04:30:44 localhost podman[79132]: 2025-10-14 08:30:44.567203141 +0000 UTC m=+0.108464935 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, container_name=collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=2, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=) Oct 14 04:30:44 localhost podman[79133]: 2025-10-14 08:30:44.567446137 +0000 UTC m=+0.104573281 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, release=1, vcs-type=git, vendor=Red Hat, Inc.) 
Oct 14 04:30:44 localhost podman[79133]: 2025-10-14 08:30:44.607163698 +0000 UTC m=+0.144290842 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15) Oct 14 04:30:44 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:30:44 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:30:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:30:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:30:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:30:52 localhost podman[79171]: 2025-10-14 08:30:52.555785013 +0000 UTC m=+0.091192515 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T14:45:33, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:30:52 localhost podman[79171]: 2025-10-14 08:30:52.616159753 +0000 UTC m=+0.151567306 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1) Oct 14 04:30:52 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:30:52 localhost podman[79173]: 2025-10-14 08:30:52.616907114 +0000 UTC m=+0.142948115 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, vcs-type=git, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, 
batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 14 04:30:52 localhost podman[79173]: 2025-10-14 08:30:52.702101917 +0000 UTC m=+0.228142908 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, distribution-scope=public, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.33.12, 
io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1) Oct 14 04:30:52 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:30:52 localhost podman[79172]: 2025-10-14 08:30:52.669618861 +0000 UTC m=+0.197148182 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, container_name=logrotate_crond, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 14 04:30:52 localhost podman[79172]: 2025-10-14 08:30:52.752291427 +0000 UTC m=+0.279820788 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, version=17.1.9, vendor=Red Hat, Inc.) Oct 14 04:30:52 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:30:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:30:54 localhost podman[79240]: 2025-10-14 08:30:54.565552181 +0000 UTC m=+0.106627376 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 14 04:30:54 localhost podman[79240]: 2025-10-14 08:30:54.94322867 +0000 UTC m=+0.484303825 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=nova_migration_target, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute) Oct 14 04:30:54 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:30:56 localhost systemd[1]: tmp-crun.4ajW6B.mount: Deactivated successfully. 
Oct 14 04:30:56 localhost podman[79263]: 2025-10-14 08:30:56.545696661 +0000 UTC m=+0.083871800 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, distribution-scope=public) Oct 14 04:30:56 localhost podman[79263]: 2025-10-14 08:30:56.569533507 +0000 
UTC m=+0.107708666 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, build-date=2025-07-21T13:28:44, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller) Oct 14 04:30:56 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:30:56 localhost podman[79264]: 2025-10-14 08:30:56.654458533 +0000 UTC m=+0.187870504 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 14 04:30:56 localhost podman[79265]: 2025-10-14 08:30:56.709682797 +0000 UTC m=+0.240087238 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, release=1, container_name=nova_compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc.) 
Oct 14 04:30:56 localhost podman[79264]: 2025-10-14 08:30:56.721359278 +0000 UTC m=+0.254771259 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 14 04:30:56 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:30:56 localhost podman[79265]: 2025-10-14 08:30:56.745196744 +0000 UTC m=+0.275601215 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step5, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 
'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:30:56 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:30:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:30:59 localhost systemd[1]: tmp-crun.m90YGW.mount: Deactivated successfully. Oct 14 04:30:59 localhost podman[79332]: 2025-10-14 08:30:59.554987171 +0000 UTC m=+0.098962921 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 14 04:30:59 localhost podman[79332]: 2025-10-14 08:30:59.750025466 +0000 UTC m=+0.294001216 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr) Oct 14 04:30:59 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:31:05 localhost sshd[79360]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:31:05 localhost systemd-logind[760]: New session 33 of user zuul. Oct 14 04:31:05 localhost systemd[1]: Started Session 33 of User zuul. 
Oct 14 04:31:06 localhost python3[79469]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 04:31:14 localhost python3[79732]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Oct 14 04:31:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:31:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:31:15 localhost systemd[1]: tmp-crun.h4gXLB.mount: Deactivated successfully. 
Oct 14 04:31:15 localhost podman[79735]: 2025-10-14 08:31:15.591906498 +0000 UTC m=+0.121762340 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 
17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=2) Oct 14 04:31:15 localhost podman[79735]: 2025-10-14 08:31:15.626270435 +0000 UTC m=+0.156126277 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, config_id=tripleo_step3, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-collectd) Oct 14 04:31:15 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:31:15 localhost podman[79736]: 2025-10-14 08:31:15.675410846 +0000 UTC m=+0.205274448 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Oct 14 04:31:15 localhost podman[79736]: 2025-10-14 08:31:15.688014322 +0000 UTC m=+0.217877934 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, tcib_managed=true, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Oct 14 04:31:15 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:31:21 localhost python3[79864]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Oct 14 04:31:21 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Oct 14 04:31:21 localhost systemd-journald[47332]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. 
Oct 14 04:31:21 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 14 04:31:21 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 04:31:21 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 04:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:31:23 localhost systemd[1]: tmp-crun.atC3Tm.mount: Deactivated successfully. Oct 14 04:31:23 localhost podman[79931]: 2025-10-14 08:31:23.575775703 +0000 UTC m=+0.075709791 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:31:23 localhost podman[79933]: 2025-10-14 08:31:23.602528428 +0000 UTC m=+0.093512897 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, 
com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, version=17.1.9) Oct 14 04:31:23 localhost podman[79931]: 2025-10-14 08:31:23.63933318 +0000 UTC m=+0.139267268 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, 
io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:31:23 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:31:23 localhost podman[79933]: 2025-10-14 08:31:23.656519348 +0000 UTC m=+0.147503757 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 14 04:31:23 localhost podman[79932]: 2025-10-14 08:31:23.656490127 +0000 UTC m=+0.150468986 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1) Oct 14 04:31:23 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:31:23 localhost podman[79932]: 2025-10-14 08:31:23.739231995 +0000 UTC m=+0.233210854 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, container_name=logrotate_crond) Oct 14 04:31:23 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:31:24 localhost systemd[1]: tmp-crun.8eGmFq.mount: Deactivated successfully. Oct 14 04:31:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:31:25 localhost podman[80003]: 2025-10-14 08:31:25.541225221 +0000 UTC m=+0.082693658 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 14 04:31:25 localhost podman[80003]: 2025-10-14 08:31:25.90618638 +0000 UTC m=+0.447654817 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Oct 14 04:31:25 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:31:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:31:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:31:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:31:27 localhost systemd[1]: tmp-crun.kFlJ46.mount: Deactivated successfully. Oct 14 04:31:27 localhost podman[80028]: 2025-10-14 08:31:27.55921738 +0000 UTC m=+0.100517104 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team) Oct 14 04:31:27 localhost podman[80029]: 2025-10-14 08:31:27.608271099 +0000 UTC m=+0.140700966 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, managed_by=tripleo_ansible, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:31:27 localhost podman[80028]: 2025-10-14 
08:31:27.63004406 +0000 UTC m=+0.171343714 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, version=17.1.9) Oct 14 04:31:27 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:31:27 localhost podman[80027]: 2025-10-14 08:31:27.643991782 +0000 UTC m=+0.186787475 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-07-21T13:28:44, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 14 04:31:27 localhost podman[80029]: 2025-10-14 08:31:27.702221696 +0000 UTC m=+0.234651483 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, container_name=nova_compute, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Oct 14 04:31:27 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:31:27 localhost podman[80027]: 2025-10-14 08:31:27.731079715 +0000 UTC m=+0.273875348 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, architecture=x86_64, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public) Oct 14 04:31:27 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:31:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:31:30 localhost systemd[1]: tmp-crun.pIVHXW.mount: Deactivated successfully. Oct 14 04:31:30 localhost podman[80096]: 2025-10-14 08:31:30.544628524 +0000 UTC m=+0.090015853 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, container_name=metrics_qdr, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:31:30 localhost podman[80096]: 2025-10-14 08:31:30.733266477 +0000 UTC m=+0.278653816 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:31:30 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:31:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:31:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:31:46 localhost podman[80202]: 2025-10-14 08:31:46.544374287 +0000 UTC m=+0.083421928 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, release=2, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, distribution-scope=public) Oct 14 04:31:46 localhost podman[80202]: 2025-10-14 08:31:46.556009778 +0000 UTC m=+0.095057409 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, distribution-scope=public, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=2, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=) Oct 14 04:31:46 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:31:46 localhost podman[80203]: 2025-10-14 08:31:46.645145316 +0000 UTC m=+0.182692486 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1, description=Red Hat 
OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T13:27:15, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:31:46 localhost podman[80203]: 2025-10-14 08:31:46.650457648 +0000 UTC m=+0.188004808 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-type=git, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, release=1, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 14 04:31:46 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:31:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:31:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:31:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:31:54 localhost systemd[1]: tmp-crun.b7Tair.mount: Deactivated successfully. 
Oct 14 04:31:54 localhost podman[80241]: 2025-10-14 08:31:54.545971776 +0000 UTC m=+0.079885312 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible) Oct 14 04:31:54 localhost podman[80241]: 2025-10-14 08:31:54.603082131 +0000 UTC m=+0.136995667 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, vcs-type=git) Oct 14 04:31:54 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:31:54 localhost podman[80239]: 2025-10-14 08:31:54.604796946 +0000 UTC m=+0.144576698 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:31:54 localhost podman[80239]: 2025-10-14 08:31:54.687996526 +0000 UTC m=+0.227776198 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 14 04:31:54 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:31:54 localhost podman[80240]: 2025-10-14 08:31:54.656759333 +0000 UTC m=+0.193851054 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, container_name=logrotate_crond, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 
17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, distribution-scope=public) Oct 14 04:31:54 localhost podman[80240]: 2025-10-14 08:31:54.741082563 +0000 UTC m=+0.278174294 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack 
Platform 17.1 cron, batch=17.1_20250721.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64) Oct 14 04:31:54 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:31:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:31:56 localhost systemd[1]: tmp-crun.VS0sLF.mount: Deactivated successfully. Oct 14 04:31:56 localhost podman[80312]: 2025-10-14 08:31:56.55438609 +0000 UTC m=+0.098093289 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, release=1, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:31:56 localhost podman[80312]: 2025-10-14 08:31:56.938613122 +0000 UTC m=+0.482320371 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:31:56 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. 
Oct 14 04:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:31:58 localhost systemd[1]: tmp-crun.nlc2QE.mount: Deactivated successfully. Oct 14 04:31:58 localhost podman[80337]: 2025-10-14 08:31:58.557564113 +0000 UTC m=+0.100340068 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_controller, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1) Oct 14 04:31:58 localhost podman[80339]: 2025-10-14 08:31:58.603413247 +0000 UTC m=+0.138415345 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5) Oct 14 04:31:58 localhost podman[80337]: 2025-10-14 08:31:58.609392556 +0000 UTC m=+0.152168521 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, version=17.1.9, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:31:58 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:31:58 localhost podman[80339]: 2025-10-14 08:31:58.635311718 +0000 UTC m=+0.170313756 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, release=1, vendor=Red Hat, Inc., container_name=nova_compute, tcib_managed=true, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:31:58 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:31:58 localhost podman[80338]: 2025-10-14 08:31:58.651541911 +0000 UTC m=+0.188752467 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, release=1) Oct 14 04:31:58 localhost podman[80338]: 2025-10-14 08:31:58.701009711 +0000 UTC m=+0.238220267 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 14 04:31:58 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:31:59 localhost systemd[1]: tmp-crun.i6SerC.mount: Deactivated successfully. Oct 14 04:32:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:32:01 localhost podman[80408]: 2025-10-14 08:32:01.527008581 +0000 UTC m=+0.072830984 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:32:01 localhost podman[80408]: 2025-10-14 08:32:01.749473168 +0000 UTC m=+0.295295551 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:32:01 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:32:06 localhost sshd[80437]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:32:07 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:32:07 localhost recover_tripleo_nova_virtqemud[80440]: 62532 Oct 14 04:32:07 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:32:07 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:32:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:32:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:32:17 localhost podman[80443]: 2025-10-14 08:32:17.531948043 +0000 UTC m=+0.078052444 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, container_name=iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 14 04:32:17 localhost podman[80443]: 2025-10-14 08:32:17.570435209 +0000 UTC m=+0.116539610 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vcs-type=git, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:32:17 localhost systemd[1]: tmp-crun.Hksm9Q.mount: Deactivated successfully. Oct 14 04:32:17 localhost podman[80442]: 2025-10-14 08:32:17.586538289 +0000 UTC m=+0.134219403 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-collectd, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public) Oct 14 04:32:17 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:32:17 localhost podman[80442]: 2025-10-14 08:32:17.601152559 +0000 UTC m=+0.148833673 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, version=17.1.9) Oct 14 04:32:17 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:32:21 localhost systemd[1]: session-33.scope: Deactivated successfully. Oct 14 04:32:21 localhost systemd[1]: session-33.scope: Consumed 6.018s CPU time. Oct 14 04:32:21 localhost systemd-logind[760]: Session 33 logged out. Waiting for processes to exit. Oct 14 04:32:21 localhost systemd-logind[760]: Removed session 33. Oct 14 04:32:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:32:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:32:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:32:25 localhost podman[80526]: 2025-10-14 08:32:25.54629549 +0000 UTC m=+0.082828081 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Oct 14 04:32:25 localhost podman[80526]: 2025-10-14 08:32:25.578858199 +0000 UTC m=+0.115390780 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, 
release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.expose-services=) Oct 14 04:32:25 localhost podman[80527]: 2025-10-14 08:32:25.599479019 +0000 UTC m=+0.132892377 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=logrotate_crond) Oct 14 04:32:25 localhost podman[80527]: 2025-10-14 08:32:25.614035808 +0000 UTC m=+0.147449146 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, release=1, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, 
maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible) Oct 14 04:32:25 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:32:25 localhost podman[80528]: 2025-10-14 08:32:25.663044906 +0000 UTC m=+0.193733691 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-type=git) Oct 14 04:32:25 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:32:25 localhost podman[80528]: 2025-10-14 08:32:25.689495132 +0000 UTC m=+0.220183847 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 04:32:25 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:32:26 localhost systemd[1]: tmp-crun.6wGVdu.mount: Deactivated successfully. Oct 14 04:32:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:32:27 localhost systemd[1]: tmp-crun.hSfYr8.mount: Deactivated successfully. 
Oct 14 04:32:27 localhost podman[80594]: 2025-10-14 08:32:27.555539736 +0000 UTC m=+0.091624066 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 14 04:32:27 localhost podman[80594]: 2025-10-14 08:32:27.923021122 +0000 UTC m=+0.459105452 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute) Oct 14 04:32:27 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:32:28 localhost sshd[80617]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:32:28 localhost systemd-logind[760]: New session 34 of user zuul. Oct 14 04:32:28 localhost systemd[1]: Started Session 34 of User zuul. Oct 14 04:32:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:32:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:32:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:32:28 localhost podman[80637]: 2025-10-14 08:32:28.827938259 +0000 UTC m=+0.093997469 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, distribution-scope=public, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12) Oct 14 04:32:28 localhost systemd[1]: tmp-crun.SpuxEI.mount: Deactivated 
successfully. Oct 14 04:32:28 localhost podman[80639]: 2025-10-14 08:32:28.89202931 +0000 UTC m=+0.150009335 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, container_name=ovn_metadata_agent, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true) Oct 14 04:32:28 localhost podman[80637]: 2025-10-14 08:32:28.905475329 +0000 UTC m=+0.171534459 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9) Oct 14 04:32:28 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:32:28 localhost podman[80639]: 2025-10-14 08:32:28.942133907 +0000 UTC m=+0.200113932 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1) Oct 14 04:32:28 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:32:28 localhost podman[80638]: 2025-10-14 08:32:28.981518788 +0000 UTC m=+0.244322982 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible) Oct 14 04:32:28 localhost python3[80636]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 14 04:32:29 localhost podman[80638]: 2025-10-14 08:32:29.007805889 +0000 UTC m=+0.270610023 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., tcib_managed=true, 
distribution-scope=public, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 14 04:32:29 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:32:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:32:32 localhost podman[80707]: 2025-10-14 08:32:32.542780067 +0000 UTC m=+0.078208487 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, container_name=metrics_qdr, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:32:32 localhost podman[80707]: 2025-10-14 08:32:32.765312415 +0000 UTC m=+0.300740805 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, architecture=x86_64, container_name=metrics_qdr, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Oct 14 04:32:32 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:32:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:32:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 5222 writes, 23K keys, 5222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5222 writes, 566 syncs, 9.23 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 04:32:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:32:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4291 writes, 19K keys, 4291 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4291 writes, 450 syncs, 9.54 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 04:32:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:32:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:32:48 localhost podman[80815]: 2025-10-14 08:32:48.561458277 +0000 UTC m=+0.097377440 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, vcs-type=git, container_name=iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:32:48 localhost podman[80814]: 2025-10-14 08:32:48.605914874 +0000 UTC m=+0.144998971 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-collectd, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:32:48 localhost podman[80815]: 2025-10-14 08:32:48.602986155 +0000 UTC m=+0.138905328 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, release=1, architecture=x86_64, container_name=iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git) Oct 14 04:32:48 localhost podman[80814]: 2025-10-14 08:32:48.621201931 +0000 UTC m=+0.160286048 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, 
distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-collectd, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible) Oct 14 04:32:48 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:32:48 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:32:54 localhost python3[80868]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 14 04:32:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:32:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:32:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:32:56 localhost podman[80872]: 2025-10-14 08:32:56.541821619 +0000 UTC m=+0.080247142 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, version=17.1.9, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Oct 14 04:32:56 localhost podman[80872]: 2025-10-14 08:32:56.591883265 +0000 UTC m=+0.130308788 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Oct 14 04:32:56 localhost podman[80871]: 2025-10-14 08:32:56.600469314 +0000 UTC m=+0.138939829 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team) Oct 14 04:32:56 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:32:56 localhost podman[80871]: 2025-10-14 08:32:56.636543727 +0000 UTC m=+0.175014202 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, build-date=2025-07-21T13:07:52) Oct 14 04:32:56 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:32:56 localhost podman[80870]: 2025-10-14 08:32:56.65430111 +0000 UTC m=+0.198371604 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_compute, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:32:56 localhost podman[80870]: 2025-10-14 08:32:56.68353523 +0000 UTC m=+0.227605674 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
release=1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Oct 14 04:32:56 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:32:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:32:58 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 14 04:32:58 localhost systemd[1]: Starting man-db-cache-update.service... Oct 14 04:32:58 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. 
Oct 14 04:32:58 localhost podman[80948]: 2025-10-14 08:32:58.458403102 +0000 UTC m=+0.109120593 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 14 04:32:58 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 14 04:32:58 localhost systemd[1]: Finished man-db-cache-update.service. Oct 14 04:32:58 localhost systemd[1]: run-rd4e0e0e1be264394a32032460d83c3e9.service: Deactivated successfully. Oct 14 04:32:58 localhost systemd[1]: run-rd34feb1faef14394a9b271e5151865d8.service: Deactivated successfully. Oct 14 04:32:58 localhost podman[80948]: 2025-10-14 08:32:58.830436799 +0000 UTC m=+0.481154320 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Oct 14 04:32:58 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:32:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:32:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:32:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:32:59 localhost podman[81114]: 2025-10-14 08:32:59.546102436 +0000 UTC m=+0.082123022 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 14 04:32:59 localhost podman[81115]: 2025-10-14 08:32:59.609539429 +0000 UTC m=+0.142443692 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T14:48:37, container_name=nova_compute, config_id=tripleo_step5) Oct 14 04:32:59 localhost podman[81113]: 2025-10-14 08:32:59.665510023 +0000 UTC m=+0.203241905 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, container_name=ovn_controller, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container) Oct 14 04:32:59 localhost podman[81114]: 2025-10-14 08:32:59.680921134 +0000 UTC m=+0.216941730 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-type=git) Oct 14 04:32:59 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:32:59 localhost podman[81115]: 2025-10-14 08:32:59.718119017 +0000 UTC m=+0.251023300 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute) Oct 14 04:32:59 localhost podman[81113]: 2025-10-14 08:32:59.725400621 +0000 UTC m=+0.263132473 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack 
TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:32:59 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:32:59 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:33:00 localhost systemd[1]: tmp-crun.Mnml6a.mount: Deactivated successfully. 
Oct 14 04:33:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:33:03 localhost systemd[1]: tmp-crun.0Sqq09.mount: Deactivated successfully. Oct 14 04:33:03 localhost podman[81187]: 2025-10-14 08:33:03.54707816 +0000 UTC m=+0.091636276 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:33:03 localhost podman[81187]: 2025-10-14 08:33:03.748980609 +0000 UTC m=+0.293538705 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, managed_by=tripleo_ansible) Oct 14 04:33:03 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:33:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:33:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:33:19 localhost podman[81216]: 2025-10-14 08:33:19.547226852 +0000 UTC m=+0.086682423 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 14 04:33:19 localhost podman[81217]: 2025-10-14 08:33:19.593840425 +0000 UTC m=+0.131618161 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:33:19 localhost podman[81217]: 2025-10-14 08:33:19.602970676 +0000 UTC m=+0.140748382 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:33:19 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:33:19 localhost podman[81216]: 2025-10-14 08:33:19.613144004 +0000 UTC m=+0.152599545 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, version=17.1.9, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, 
name=rhosp17/openstack-collectd, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible) Oct 14 04:33:19 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:33:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:33:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:33:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:33:27 localhost podman[81298]: 2025-10-14 08:33:27.559841078 +0000 UTC m=+0.094986432 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, container_name=ceilometer_agent_compute, release=1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=) Oct 14 04:33:27 localhost podman[81298]: 2025-10-14 08:33:27.596091876 +0000 UTC m=+0.131237230 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.9, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public) Oct 14 04:33:27 localhost podman[81299]: 2025-10-14 08:33:27.606896062 +0000 UTC m=+0.138612454 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:33:27 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:33:27 localhost podman[81299]: 2025-10-14 08:33:27.622872784 +0000 UTC m=+0.154589166 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, tcib_managed=true, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container) Oct 14 04:33:27 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:33:27 localhost podman[81300]: 2025-10-14 08:33:27.715296977 +0000 UTC m=+0.245932921 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:33:27 localhost podman[81300]: 2025-10-14 08:33:27.773352081 +0000 UTC m=+0.303987995 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12) Oct 14 04:33:27 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:33:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:33:29 localhost podman[81364]: 2025-10-14 08:33:29.538996822 +0000 UTC m=+0.084047383 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20250721.1, 
build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 14 04:33:29 localhost podman[81364]: 2025-10-14 08:33:29.888178901 +0000 UTC m=+0.433229472 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, version=17.1.9, container_name=nova_migration_target) Oct 14 04:33:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:33:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:33:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:33:29 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. 
Oct 14 04:33:30 localhost podman[81388]: 2025-10-14 08:33:30.012878227 +0000 UTC m=+0.097544089 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64) Oct 14 04:33:30 localhost podman[81387]: 2025-10-14 08:33:30.060306451 +0000 UTC m=+0.143550455 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:28:44, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public) Oct 14 04:33:30 localhost podman[81387]: 2025-10-14 08:33:30.114249436 +0000 UTC m=+0.197493430 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Oct 14 04:33:30 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:33:30 localhost podman[81388]: 2025-10-14 08:33:30.128859082 +0000 UTC m=+0.213524914 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, container_name=ovn_metadata_agent, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, release=1, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Oct 14 04:33:30 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:33:30 localhost podman[81389]: 2025-10-14 08:33:30.117131993 +0000 UTC m=+0.198513589 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, container_name=nova_compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Oct 14 04:33:30 localhost podman[81389]: 2025-10-14 08:33:30.197289362 +0000 UTC m=+0.278671008 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1) Oct 14 04:33:30 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:33:30 localhost systemd[1]: tmp-crun.t7FFiS.mount: Deactivated successfully. Oct 14 04:33:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:33:34 localhost systemd[1]: tmp-crun.fwNhLH.mount: Deactivated successfully. Oct 14 04:33:34 localhost podman[81455]: 2025-10-14 08:33:34.56542227 +0000 UTC m=+0.100217830 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_id=tripleo_step1, container_name=metrics_qdr, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 14 04:33:34 localhost podman[81455]: 2025-10-14 08:33:34.782578349 +0000 UTC m=+0.317373879 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1) Oct 14 04:33:34 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:33:35 localhost python3[81500]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:33:38 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 14 04:33:39 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 14 04:33:47 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:33:47 localhost recover_tripleo_nova_virtqemud[81817]: 62532 Oct 14 04:33:47 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:33:47 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:33:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:33:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:33:50 localhost systemd[1]: tmp-crun.bAovHr.mount: Deactivated successfully. 
Oct 14 04:33:50 localhost podman[81818]: 2025-10-14 08:33:50.558863603 +0000 UTC m=+0.090366639 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, release=2, tcib_managed=true, vcs-type=git, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 14 04:33:50 localhost podman[81818]: 2025-10-14 08:33:50.568329763 +0000 UTC m=+0.099832789 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:04:03, release=2, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Oct 14 04:33:50 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:33:50 localhost systemd[1]: tmp-crun.diW8u8.mount: Deactivated successfully. 
Oct 14 04:33:50 localhost podman[81819]: 2025-10-14 08:33:50.665554493 +0000 UTC m=+0.198073396 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, 
batch=17.1_20250721.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Oct 14 04:33:50 localhost podman[81819]: 2025-10-14 08:33:50.700262361 +0000 UTC m=+0.232781254 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-iscsid-container) Oct 14 04:33:50 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:33:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:33:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:33:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:33:58 localhost systemd[1]: tmp-crun.CQOdAE.mount: Deactivated successfully. 
Oct 14 04:33:58 localhost podman[81857]: 2025-10-14 08:33:58.559220518 +0000 UTC m=+0.099131492 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, distribution-scope=public) Oct 14 04:33:58 localhost podman[81857]: 2025-10-14 08:33:58.586426297 +0000 UTC m=+0.126337251 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, release=1, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 04:33:58 localhost systemd[1]: tmp-crun.l5BjPL.mount: Deactivated successfully. Oct 14 04:33:58 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:33:58 localhost podman[81859]: 2025-10-14 08:33:58.608607124 +0000 UTC m=+0.141512393 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) Oct 14 04:33:58 localhost podman[81859]: 2025-10-14 08:33:58.641183504 +0000 UTC m=+0.174088763 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1) Oct 14 04:33:58 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:33:58 localhost podman[81858]: 2025-10-14 08:33:58.659235452 +0000 UTC m=+0.195723805 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=logrotate_crond, vcs-type=git, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team) Oct 14 04:33:58 localhost podman[81858]: 2025-10-14 08:33:58.696165458 +0000 UTC m=+0.232653841 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, architecture=x86_64, summary=Red Hat 
OpenStack Platform 17.1 cron, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, version=17.1.9) Oct 14 04:33:58 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:34:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:34:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:34:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:34:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:34:00 localhost podman[81930]: 2025-10-14 08:34:00.548396824 +0000 UTC m=+0.091611882 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-ovn-controller, release=1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
build-date=2025-07-21T13:28:44, version=17.1.9, io.openshift.expose-services=) Oct 14 04:34:00 localhost podman[81930]: 2025-10-14 08:34:00.598106989 +0000 UTC m=+0.141322047 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, distribution-scope=public, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Oct 14 04:34:00 localhost 
systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:34:00 localhost podman[81931]: 2025-10-14 08:34:00.60575831 +0000 UTC m=+0.144355976 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-type=git) Oct 14 04:34:00 localhost systemd[1]: tmp-crun.5vW5kf.mount: Deactivated successfully. Oct 14 04:34:00 localhost podman[81938]: 2025-10-14 08:34:00.67381231 +0000 UTC m=+0.204723953 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1, vendor=Red Hat, Inc., config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9) Oct 14 04:34:00 localhost podman[81938]: 2025-10-14 08:34:00.703115724 +0000 UTC m=+0.234027357 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, 
container_name=nova_compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step5, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:34:00 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:34:00 localhost podman[81932]: 2025-10-14 08:34:00.761100617 +0000 UTC m=+0.297100434 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:34:00 localhost podman[81932]: 2025-10-14 08:34:00.829416392 +0000 UTC m=+0.365416229 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, build-date=2025-07-21T16:28:53) Oct 14 04:34:00 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:34:00 localhost podman[81931]: 2025-10-14 08:34:00.992353278 +0000 UTC m=+0.530950984 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, release=1, build-date=2025-07-21T14:48:37, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, batch=17.1_20250721.1) Oct 14 04:34:01 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:34:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:34:05 localhost podman[82028]: 2025-10-14 08:34:05.557155444 +0000 UTC m=+0.086098737 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team) Oct 14 04:34:05 localhost podman[82028]: 2025-10-14 08:34:05.765637095 +0000 UTC m=+0.294580398 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:34:05 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:34:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:34:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:34:21 localhost systemd[1]: tmp-crun.nuEmWv.mount: Deactivated successfully. 
Oct 14 04:34:21 localhost podman[82057]: 2025-10-14 08:34:21.568151825 +0000 UTC m=+0.111816247 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 
collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, version=17.1.9, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=2) Oct 14 04:34:21 localhost podman[82058]: 2025-10-14 08:34:21.52598135 +0000 UTC m=+0.069584040 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team) Oct 14 04:34:21 localhost podman[82057]: 2025-10-14 08:34:21.581302522 +0000 UTC m=+0.124966994 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=2, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.9, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 14 04:34:21 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:34:21 localhost podman[82058]: 2025-10-14 08:34:21.60615938 +0000 UTC m=+0.149762040 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T13:27:15, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, release=1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.9, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:34:21 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:34:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:34:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:34:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:34:29 localhost podman[82142]: 2025-10-14 08:34:29.550902023 +0000 UTC m=+0.089539048 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 14 04:34:29 localhost podman[82142]: 2025-10-14 08:34:29.582120378 +0000 UTC m=+0.120757473 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, release=1) Oct 14 04:34:29 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:34:29 localhost podman[82143]: 2025-10-14 08:34:29.659491523 +0000 UTC m=+0.192665853 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, 
distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 04:34:29 localhost podman[82143]: 2025-10-14 08:34:29.695260308 +0000 UTC m=+0.228434588 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, container_name=logrotate_crond, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1) Oct 14 04:34:29 localhost podman[82144]: 2025-10-14 08:34:29.713364957 +0000 UTC m=+0.245838619 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, distribution-scope=public) Oct 14 04:34:29 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:34:29 localhost podman[82144]: 2025-10-14 08:34:29.774233476 +0000 UTC m=+0.306707088 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 14 04:34:29 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:34:30 localhost systemd[1]: tmp-crun.xXB0qi.mount: Deactivated successfully. Oct 14 04:34:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:34:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:34:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:34:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:34:31 localhost podman[82214]: 2025-10-14 08:34:31.557375557 +0000 UTC m=+0.096109701 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, container_name=nova_migration_target, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, 
io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:34:31 localhost podman[82213]: 2025-10-14 08:34:31.604200485 +0000 UTC m=+0.146053421 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, release=1, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64) Oct 14 04:34:31 localhost podman[82213]: 2025-10-14 08:34:31.626882495 +0000 UTC m=+0.168735441 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:28:44, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller) Oct 14 04:34:31 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:34:31 localhost podman[82216]: 2025-10-14 08:34:31.72468539 +0000 UTC m=+0.256836260 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:34:31 localhost podman[82216]: 2025-10-14 08:34:31.75911906 +0000 UTC m=+0.291269940 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, tcib_managed=true, release=1, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:34:31 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:34:31 localhost podman[82215]: 2025-10-14 08:34:31.771992751 +0000 UTC m=+0.304948202 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 14 04:34:31 localhost podman[82215]: 2025-10-14 08:34:31.805605229 +0000 UTC m=+0.338560670 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4) Oct 14 04:34:31 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:34:31 localhost podman[82214]: 2025-10-14 08:34:31.926768811 +0000 UTC m=+0.465502935 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, release=1) Oct 14 04:34:31 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:34:33 localhost python3[82323]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 04:34:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:34:36 localhost podman[82445]: 2025-10-14 08:34:36.550103215 +0000 UTC m=+0.085263355 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=metrics_qdr, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, 
Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Oct 14 04:34:36 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 14 04:34:36 localhost podman[82445]: 2025-10-14 08:34:36.785423424 +0000 UTC m=+0.320583604 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:34:36 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:34:36 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 14 04:34:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:34:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:34:52 localhost podman[82618]: 2025-10-14 08:34:52.569985118 +0000 UTC m=+0.100105847 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9) Oct 14 04:34:52 localhost podman[82618]: 2025-10-14 08:34:52.583256759 +0000 UTC m=+0.113377498 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3) Oct 14 04:34:52 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:34:52 localhost podman[82617]: 2025-10-14 08:34:52.655000705 +0000 UTC m=+0.185422352 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=2, container_name=collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 14 04:34:52 localhost podman[82617]: 2025-10-14 08:34:52.69150674 +0000 UTC m=+0.221928387 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, tcib_managed=true, batch=17.1_20250721.1, release=2, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 14 04:34:52 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:35:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:35:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:35:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:35:00 localhost podman[82656]: 2025-10-14 08:35:00.540286358 +0000 UTC m=+0.079710418 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, release=1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:45:33, tcib_managed=true, vcs-type=git) Oct 14 04:35:00 localhost podman[82657]: 2025-10-14 08:35:00.591988305 +0000 UTC m=+0.129955217 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, container_name=logrotate_crond, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) 
Oct 14 04:35:00 localhost podman[82656]: 2025-10-14 08:35:00.597305654 +0000 UTC m=+0.136729684 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, container_name=ceilometer_agent_compute, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:35:00 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:35:00 localhost systemd[1]: tmp-crun.1gF2Hk.mount: Deactivated successfully. Oct 14 04:35:00 localhost podman[82658]: 2025-10-14 08:35:00.650790839 +0000 UTC m=+0.184705383 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 04:35:00 localhost podman[82657]: 2025-10-14 08:35:00.674664419 +0000 UTC m=+0.212631291 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, 
com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible, container_name=logrotate_crond, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 04:35:00 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:35:00 localhost podman[82658]: 2025-10-14 08:35:00.730031953 +0000 UTC m=+0.263946497 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, build-date=2025-07-21T15:29:47, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1, tcib_managed=true) Oct 14 04:35:00 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:35:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:35:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:35:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:35:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:35:02 localhost systemd[1]: tmp-crun.BaB7AI.mount: Deactivated successfully. 
Oct 14 04:35:02 localhost podman[82725]: 2025-10-14 08:35:02.556459319 +0000 UTC m=+0.097533789 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:35:02 localhost podman[82728]: 2025-10-14 08:35:02.600049311 +0000 
UTC m=+0.134089715 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1) Oct 14 04:35:02 localhost podman[82727]: 2025-10-14 08:35:02.65788064 +0000 UTC m=+0.192842949 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 14 04:35:02 localhost podman[82727]: 2025-10-14 08:35:02.705114998 +0000 UTC m=+0.240077267 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Oct 14 04:35:02 localhost podman[82728]: 2025-10-14 08:35:02.711389204 +0000 UTC m=+0.245429608 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, release=1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37) Oct 14 04:35:02 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:35:02 localhost podman[82725]: 2025-10-14 08:35:02.73316343 +0000 UTC m=+0.274237900 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12) Oct 14 04:35:02 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:35:02 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:35:02 localhost podman[82726]: 2025-10-14 08:35:02.712963776 +0000 UTC m=+0.253291746 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:35:03 localhost podman[82726]: 2025-10-14 08:35:03.115142046 +0000 UTC m=+0.655470046 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:35:03 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:35:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:35:07 localhost systemd[1]: tmp-crun.NqjJ55.mount: Deactivated successfully. 
Oct 14 04:35:07 localhost podman[82818]: 2025-10-14 08:35:07.536372427 +0000 UTC m=+0.077049197 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64) Oct 14 04:35:07 localhost podman[82818]: 2025-10-14 08:35:07.734222086 +0000 UTC m=+0.274898826 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, architecture=x86_64, version=17.1.9, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:35:07 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:35:16 localhost python3[82862]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Oct 14 04:35:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:35:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:35:23 localhost podman[82863]: 2025-10-14 08:35:23.55804346 +0000 UTC m=+0.097649962 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, release=2, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:35:23 localhost podman[82864]: 2025-10-14 08:35:23.601391325 +0000 UTC m=+0.138616434 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:35:23 localhost podman[82864]: 2025-10-14 08:35:23.615134308 +0000 UTC m=+0.152359417 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, version=17.1.9, vcs-type=git, container_name=iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1) Oct 14 04:35:23 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:35:23 localhost podman[82863]: 2025-10-14 08:35:23.670919953 +0000 UTC m=+0.210526445 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-07-21T13:04:03, container_name=collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 14 04:35:23 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:35:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:35:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:35:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:35:31 localhost podman[82946]: 2025-10-14 08:35:31.538853315 +0000 UTC m=+0.083954870 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:35:31 localhost podman[82947]: 2025-10-14 08:35:31.555165126 +0000 UTC m=+0.091758066 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 14 04:35:31 localhost podman[82947]: 2025-10-14 08:35:31.594440554 +0000 UTC m=+0.131033484 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:35:31 localhost podman[82948]: 2025-10-14 08:35:31.601176612 +0000 UTC m=+0.135531463 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_ipmi, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:35:31 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:35:31 localhost podman[82948]: 2025-10-14 08:35:31.632228873 +0000 UTC m=+0.166583764 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, 
architecture=x86_64, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public) Oct 14 04:35:31 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:35:31 localhost podman[82946]: 2025-10-14 08:35:31.646407528 +0000 UTC m=+0.191509113 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T14:45:33, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, release=1, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:35:31 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:35:33 localhost podman[83020]: 2025-10-14 08:35:33.595472295 +0000 UTC m=+0.132616136 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true) Oct 14 04:35:33 localhost podman[83021]: 2025-10-14 08:35:33.56879317 +0000 UTC m=+0.100680312 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, release=1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:35:33 localhost podman[83018]: 2025-10-14 08:35:33.654339421 +0000 UTC m=+0.196504084 container health_status 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9) Oct 14 04:35:33 localhost podman[83020]: 2025-10-14 08:35:33.665140287 +0000 UTC m=+0.202284098 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:35:33 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:35:33 localhost podman[83019]: 2025-10-14 08:35:33.703170192 +0000 UTC m=+0.243167289 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container) Oct 14 04:35:33 localhost podman[83021]: 2025-10-14 08:35:33.703802048 +0000 UTC m=+0.235689170 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step5, container_name=nova_compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Oct 14 04:35:33 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:35:33 localhost podman[83018]: 2025-10-14 08:35:33.753940483 +0000 UTC m=+0.296105106 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ovn_controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12) Oct 14 04:35:33 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:35:34 localhost podman[83019]: 2025-10-14 08:35:34.065790427 +0000 UTC m=+0.605787584 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, vcs-type=git) Oct 14 04:35:34 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:35:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:35:38 localhost podman[83115]: 2025-10-14 08:35:38.541888377 +0000 UTC m=+0.081175627 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Oct 14 04:35:38 localhost podman[83115]: 2025-10-14 08:35:38.770179711 +0000 UTC m=+0.309466991 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd) Oct 14 04:35:38 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:35:49 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Oct 14 04:35:49 localhost recover_tripleo_nova_virtqemud[83159]: 62532 Oct 14 04:35:49 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:35:49 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:35:50 localhost podman[83244]: 2025-10-14 08:35:50.206908863 +0000 UTC m=+0.089316431 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, ceph=True, version=7, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=553, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:35:50 localhost podman[83244]: 2025-10-14 08:35:50.308323074 +0000 UTC m=+0.190730642 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, description=Red Hat Ceph Storage 7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-type=git) Oct 14 04:35:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:35:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:35:54 localhost podman[83385]: 2025-10-14 08:35:54.601976424 +0000 UTC m=+0.139595671 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, 
tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, release=2) Oct 14 04:35:54 localhost podman[83386]: 2025-10-14 08:35:54.576933152 +0000 UTC m=+0.113178192 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.9) Oct 14 04:35:54 localhost podman[83385]: 2025-10-14 08:35:54.638402547 +0000 UTC m=+0.176021724 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, release=2, tcib_managed=true, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1) Oct 14 04:35:54 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:35:54 localhost podman[83386]: 2025-10-14 08:35:54.660293755 +0000 UTC m=+0.196538755 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1) Oct 14 04:35:54 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:36:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:36:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:36:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:36:02 localhost systemd[1]: tmp-crun.V1qT2D.mount: Deactivated successfully. Oct 14 04:36:02 localhost podman[83427]: 2025-10-14 08:36:02.55784363 +0000 UTC m=+0.088721115 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=ceilometer_agent_ipmi) Oct 14 04:36:02 localhost systemd[1]: tmp-crun.aICXDG.mount: Deactivated successfully. 
Oct 14 04:36:02 localhost podman[83426]: 2025-10-14 08:36:02.603894748 +0000 UTC m=+0.136092749 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, 
release=1, vcs-type=git, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:36:02 localhost podman[83426]: 2025-10-14 08:36:02.615962267 +0000 UTC m=+0.148160318 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, container_name=logrotate_crond, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron) Oct 14 04:36:02 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:36:02 localhost podman[83427]: 2025-10-14 08:36:02.666434351 +0000 UTC m=+0.197311886 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, distribution-scope=public, release=1, vendor=Red Hat, Inc.) Oct 14 04:36:02 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:36:02 localhost podman[83425]: 2025-10-14 08:36:02.749036315 +0000 UTC m=+0.282882718 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1) Oct 14 04:36:02 localhost podman[83425]: 2025-10-14 08:36:02.780144407 +0000 UTC m=+0.313990860 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container) Oct 14 04:36:02 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:36:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:36:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:36:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:36:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:36:04 localhost systemd[1]: tmp-crun.LVSwOz.mount: Deactivated successfully. 
Oct 14 04:36:04 localhost podman[83498]: 2025-10-14 08:36:04.561533943 +0000 UTC m=+0.095948558 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:36:04 localhost podman[83499]: 2025-10-14 08:36:04.605005031 +0000 
UTC m=+0.136520050 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12) Oct 14 04:36:04 localhost podman[83498]: 2025-10-14 08:36:04.614285476 +0000 UTC m=+0.148700091 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible) Oct 14 04:36:04 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:36:04 localhost podman[83500]: 2025-10-14 08:36:04.658425633 +0000 UTC m=+0.187479087 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Oct 14 04:36:04 localhost podman[83501]: 2025-10-14 08:36:04.718598474 +0000 UTC m=+0.243276971 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Oct 14 04:36:04 localhost podman[83500]: 2025-10-14 08:36:04.73548689 +0000 UTC m=+0.264540314 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=) Oct 14 04:36:04 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:36:04 localhost podman[83501]: 2025-10-14 08:36:04.774974004 +0000 UTC m=+0.299652461 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, architecture=x86_64, container_name=nova_compute) Oct 14 04:36:04 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:36:04 localhost podman[83499]: 2025-10-14 08:36:04.96813533 +0000 UTC m=+0.499650359 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target) Oct 14 04:36:04 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:36:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:36:09 localhost podman[83593]: 2025-10-14 08:36:09.53879996 +0000 UTC m=+0.079609285 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:36:09 localhost podman[83593]: 2025-10-14 08:36:09.768139152 +0000 UTC m=+0.308948477 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Oct 14 04:36:09 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:36:16 localhost systemd[1]: session-34.scope: Deactivated successfully. Oct 14 04:36:16 localhost systemd[1]: session-34.scope: Consumed 19.202s CPU time. Oct 14 04:36:16 localhost systemd-logind[760]: Session 34 logged out. Waiting for processes to exit. Oct 14 04:36:16 localhost systemd-logind[760]: Removed session 34. Oct 14 04:36:24 localhost rhsm-service[6494]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
Oct 14 04:36:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:36:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:36:25 localhost systemd[1]: tmp-crun.g2Mpwu.mount: Deactivated successfully. Oct 14 04:36:25 localhost podman[83799]: 2025-10-14 08:36:25.559591368 +0000 UTC m=+0.101574726 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:36:25 localhost podman[83799]: 2025-10-14 08:36:25.594602044 +0000 UTC m=+0.136585452 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-collectd) Oct 14 04:36:25 localhost systemd[1]: tmp-crun.9mQAFS.mount: Deactivated successfully. 
Oct 14 04:36:25 localhost podman[83800]: 2025-10-14 08:36:25.602020819 +0000 UTC m=+0.137482455 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, tcib_managed=true, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64) Oct 14 04:36:25 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:36:25 localhost podman[83800]: 2025-10-14 08:36:25.610709278 +0000 UTC m=+0.146170954 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, tcib_managed=true, release=1, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:36:25 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:36:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:36:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:36:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:36:33 localhost podman[83883]: 2025-10-14 08:36:33.545942601 +0000 UTC m=+0.080116469 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, batch=17.1_20250721.1, container_name=logrotate_crond, architecture=x86_64, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 04:36:33 localhost podman[83883]: 2025-10-14 08:36:33.557132996 +0000 UTC m=+0.091306854 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 
cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64) Oct 14 04:36:33 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:36:33 localhost podman[83884]: 2025-10-14 08:36:33.610896367 +0000 UTC m=+0.141429629 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, release=1, version=17.1.9, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:36:33 localhost podman[83884]: 2025-10-14 08:36:33.673289546 +0000 UTC m=+0.203822768 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team) Oct 14 04:36:33 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:36:33 localhost podman[83882]: 2025-10-14 08:36:33.676995725 +0000 UTC m=+0.210564377 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team) Oct 14 04:36:33 localhost podman[83882]: 2025-10-14 08:36:33.761254732 +0000 UTC m=+0.294823394 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9) Oct 14 04:36:33 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:36:34 localhost systemd[1]: tmp-crun.o1a6Aw.mount: Deactivated successfully. Oct 14 04:36:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:36:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:36:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:36:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:36:35 localhost systemd[1]: tmp-crun.C3Fm3o.mount: Deactivated successfully. 
Oct 14 04:36:35 localhost podman[83956]: 2025-10-14 08:36:35.558941677 +0000 UTC m=+0.090604606 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4) Oct 14 04:36:35 localhost podman[83955]: 2025-10-14 08:36:35.614433613 +0000 UTC m=+0.148566688 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, 
version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:36:35 localhost podman[83959]: 2025-10-14 08:36:35.666932431 +0000 UTC m=+0.192349825 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step5, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, distribution-scope=public, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container) Oct 14 04:36:35 localhost podman[83955]: 2025-10-14 08:36:35.69564275 +0000 UTC m=+0.229775845 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, io.openshift.expose-services=) Oct 14 04:36:35 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:36:35 localhost podman[83957]: 2025-10-14 08:36:35.713379189 +0000 UTC m=+0.240060986 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, container_name=ovn_metadata_agent, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1) Oct 14 04:36:35 localhost podman[83959]: 2025-10-14 08:36:35.724326609 +0000 UTC m=+0.249743993 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, version=17.1.9, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:36:35 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:36:35 localhost podman[83957]: 2025-10-14 08:36:35.758080761 +0000 UTC m=+0.284762558 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc.) Oct 14 04:36:35 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:36:35 localhost podman[83956]: 2025-10-14 08:36:35.934253257 +0000 UTC m=+0.465916196 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, tcib_managed=true, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, distribution-scope=public, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9) Oct 14 04:36:35 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:36:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:36:40 localhost podman[84048]: 2025-10-14 08:36:40.536886892 +0000 UTC m=+0.075293481 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack 
osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:36:40 localhost podman[84048]: 2025-10-14 08:36:40.764240852 +0000 UTC m=+0.302647471 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, tcib_managed=true, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Oct 14 04:36:40 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:36:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:36:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:36:56 localhost podman[84156]: 2025-10-14 08:36:56.550368033 +0000 UTC m=+0.087854183 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, container_name=iscsid, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 04:36:56 localhost podman[84155]: 2025-10-14 08:36:56.602943223 +0000 UTC m=+0.140437944 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, tcib_managed=true, release=2, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:36:56 localhost podman[84155]: 2025-10-14 08:36:56.616203393 +0000 UTC m=+0.153698124 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, name=rhosp17/openstack-collectd, architecture=x86_64, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20250721.1) Oct 14 04:36:56 localhost podman[84156]: 2025-10-14 08:36:56.616664625 +0000 UTC m=+0.154150745 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, 
distribution-scope=public, release=1, architecture=x86_64, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-07-21T13:27:15, container_name=iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:36:56 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:36:56 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:37:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:37:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:37:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:37:04 localhost systemd[1]: tmp-crun.FU1hNK.mount: Deactivated successfully. Oct 14 04:37:04 localhost podman[84193]: 2025-10-14 08:37:04.558528712 +0000 UTC m=+0.097546599 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, release=1, version=17.1.9, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:37:04 localhost podman[84194]: 2025-10-14 08:37:04.610396473 +0000 UTC m=+0.146753069 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 14 04:37:04 localhost podman[84194]: 2025-10-14 08:37:04.625198664 +0000 UTC m=+0.161555270 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:37:04 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:37:04 localhost podman[84193]: 2025-10-14 08:37:04.639214825 +0000 UTC m=+0.178232642 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true) Oct 14 04:37:04 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:37:04 localhost podman[84195]: 2025-10-14 08:37:04.708404384 +0000 UTC m=+0.241432653 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, release=1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:37:04 localhost podman[84195]: 2025-10-14 08:37:04.768278316 +0000 UTC m=+0.301306625 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, release=1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git) Oct 14 04:37:04 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:37:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:37:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:37:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:37:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:37:06 localhost podman[84264]: 2025-10-14 08:37:06.54770646 +0000 UTC m=+0.087227357 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37) Oct 14 04:37:06 localhost podman[84263]: 2025-10-14 08:37:06.583710791 +0000 UTC m=+0.123400963 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T13:28:44, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12) Oct 14 04:37:06 localhost podman[84263]: 2025-10-14 08:37:06.599582621 +0000 UTC m=+0.139272803 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, version=17.1.9, container_name=ovn_controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1) Oct 14 04:37:06 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:37:06 localhost podman[84265]: 2025-10-14 08:37:06.639614299 +0000 UTC m=+0.175251053 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:37:06 localhost podman[84266]: 2025-10-14 08:37:06.68694279 +0000 UTC m=+0.218533157 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, container_name=nova_compute, managed_by=tripleo_ansible, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5) Oct 14 04:37:06 localhost podman[84265]: 2025-10-14 08:37:06.709241979 +0000 UTC m=+0.244878783 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 14 04:37:06 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:37:06 localhost podman[84266]: 2025-10-14 08:37:06.765952588 +0000 UTC m=+0.297542915 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, batch=17.1_20250721.1) Oct 14 04:37:06 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:37:06 localhost podman[84264]: 2025-10-14 08:37:06.880128946 +0000 UTC m=+0.419649913 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 14 04:37:06 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:37:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:37:11 localhost podman[84355]: 2025-10-14 08:37:11.542204893 +0000 UTC m=+0.081427763 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, release=1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:37:11 localhost podman[84355]: 2025-10-14 08:37:11.76415456 +0000 UTC m=+0.303377480 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step1, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack 
Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12) Oct 14 04:37:11 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:37:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:37:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:37:27 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:37:27 localhost recover_tripleo_nova_virtqemud[84391]: 62532 Oct 14 04:37:27 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Oct 14 04:37:27 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:37:27 localhost podman[84383]: 2025-10-14 08:37:27.621461952 +0000 UTC m=+0.158670374 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, release=2, config_id=tripleo_step3, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible) Oct 14 04:37:27 localhost systemd[1]: tmp-crun.K4XmoT.mount: Deactivated successfully. Oct 14 04:37:27 localhost podman[84384]: 2025-10-14 08:37:27.637182138 +0000 UTC m=+0.162890936 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:27:15, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, container_name=iscsid, release=1, distribution-scope=public) Oct 14 04:37:27 localhost podman[84384]: 2025-10-14 08:37:27.647929893 +0000 UTC m=+0.173638651 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, release=1, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=) Oct 14 04:37:27 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:37:27 localhost podman[84383]: 2025-10-14 08:37:27.662024945 +0000 UTC m=+0.199233257 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, container_name=collectd, release=2, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 14 04:37:27 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:37:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:37:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:37:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:37:35 localhost podman[84468]: 2025-10-14 08:37:35.546861104 +0000 UTC m=+0.085994153 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ceilometer_agent_compute) Oct 14 04:37:35 localhost podman[84468]: 2025-10-14 08:37:35.582419184 +0000 UTC m=+0.121552183 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 14 04:37:35 localhost systemd[1]: tmp-crun.QcWUy9.mount: Deactivated successfully. 
Oct 14 04:37:35 localhost podman[84469]: 2025-10-14 08:37:35.600893313 +0000 UTC m=+0.139128939 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, 
name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9) Oct 14 04:37:35 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:37:35 localhost podman[84469]: 2025-10-14 08:37:35.611931854 +0000 UTC m=+0.150167520 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, release=1, container_name=logrotate_crond, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:37:35 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:37:35 localhost podman[84470]: 2025-10-14 08:37:35.697810024 +0000 UTC m=+0.231275393 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:37:35 localhost podman[84470]: 2025-10-14 08:37:35.758142369 +0000 UTC m=+0.291607738 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.9, vcs-type=git, 
batch=17.1_20250721.1, config_id=tripleo_step4, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47) Oct 14 04:37:35 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:37:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:37:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:37:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.
Oct 14 04:37:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.
Oct 14 04:37:37 localhost systemd[1]: tmp-crun.oc8XrP.mount: Deactivated successfully.
Oct 14 04:37:37 localhost podman[84542]: 2025-10-14 08:37:37.538576469 +0000 UTC m=+0.082022829 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1, tcib_managed=true, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, version=17.1.9)
Oct 14 04:37:37 localhost systemd[1]: tmp-crun.iOW1AI.mount: Deactivated successfully.
Oct 14 04:37:37 localhost podman[84544]: 2025-10-14 08:37:37.605379634 +0000 UTC m=+0.146022970 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, release=1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Oct 14 04:37:37 localhost podman[84543]: 2025-10-14 08:37:37.653148587 +0000 UTC m=+0.194576004 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc.)
Oct 14 04:37:37 localhost podman[84544]: 2025-10-14 08:37:37.658437267 +0000 UTC m=+0.199080643 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, distribution-scope=public, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 14 04:37:37 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully.
Oct 14 04:37:37 localhost podman[84541]: 2025-10-14 08:37:37.751926678 +0000 UTC m=+0.294874445 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 14 04:37:37 localhost podman[84543]: 2025-10-14 08:37:37.772172484 +0000 UTC m=+0.313599951 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64)
Oct 14 04:37:37 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully.
Oct 14 04:37:37 localhost podman[84541]: 2025-10-14 08:37:37.785812293 +0000 UTC m=+0.328760050 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller)
Oct 14 04:37:37 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully.
Oct 14 04:37:37 localhost podman[84542]: 2025-10-14 08:37:37.906564936 +0000 UTC m=+0.450011316 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, release=1)
Oct 14 04:37:37 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully.
Oct 14 04:37:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.
Oct 14 04:37:42 localhost podman[84636]: 2025-10-14 08:37:42.546766373 +0000 UTC m=+0.086419705 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 14 04:37:42 localhost podman[84636]: 2025-10-14 08:37:42.794393898 +0000 UTC m=+0.334047170 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, batch=17.1_20250721.1, container_name=metrics_qdr, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.33.12)
Oct 14 04:37:42 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully.
Oct 14 04:37:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.
Oct 14 04:37:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.
Oct 14 04:37:58 localhost podman[84742]: 2025-10-14 08:37:58.543553946 +0000 UTC m=+0.083049156 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_id=tripleo_step3, distribution-scope=public, release=1, tcib_managed=true, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.openshift.expose-services=)
Oct 14 04:37:58 localhost podman[84742]: 2025-10-14 08:37:58.558029348 +0000 UTC m=+0.097524528 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, version=17.1.9)
Oct 14 04:37:58 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully.
Oct 14 04:37:58 localhost podman[84741]: 2025-10-14 08:37:58.654299414 +0000 UTC m=+0.195439677 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd)
Oct 14 04:37:58 localhost podman[84741]: 2025-10-14 08:37:58.669007842 +0000 UTC m=+0.210148085 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, io.buildah.version=1.33.12)
Oct 14 04:37:58 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully.
Oct 14 04:38:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.
Oct 14 04:38:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.
Oct 14 04:38:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.
Oct 14 04:38:06 localhost podman[84782]: 2025-10-14 08:38:06.584599615 +0000 UTC m=+0.121499833 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc.) Oct 14 04:38:06 localhost podman[84782]: 2025-10-14 08:38:06.595089212 +0000 UTC m=+0.131989490 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-cron, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, version=17.1.9, config_id=tripleo_step4, distribution-scope=public) Oct 14 04:38:06 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:38:06 localhost systemd[1]: tmp-crun.rwNZ5B.mount: Deactivated successfully. 
Oct 14 04:38:06 localhost podman[84783]: 2025-10-14 08:38:06.662225136 +0000 UTC m=+0.195915159 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, release=1, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:38:06 localhost podman[84783]: 2025-10-14 08:38:06.68998003 +0000 UTC m=+0.223670053 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public) Oct 14 04:38:06 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:38:06 localhost systemd[1]: tmp-crun.lFOG57.mount: Deactivated successfully. Oct 14 04:38:06 localhost podman[84781]: 2025-10-14 08:38:06.757792483 +0000 UTC m=+0.296603701 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 14 04:38:06 localhost podman[84781]: 2025-10-14 08:38:06.813160706 +0000 UTC m=+0.351971854 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 14 04:38:06 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:38:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:38:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:38:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:38:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:38:08 localhost systemd[1]: tmp-crun.LQkC4Z.mount: Deactivated successfully. Oct 14 04:38:08 localhost podman[84848]: 2025-10-14 08:38:08.564014253 +0000 UTC m=+0.106542037 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container) Oct 14 04:38:08 localhost podman[84849]: 2025-10-14 08:38:08.589555118 +0000 UTC m=+0.129649897 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, managed_by=tripleo_ansible) Oct 14 04:38:08 localhost podman[84848]: 2025-10-14 08:38:08.644582183 +0000 UTC m=+0.187110017 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 14 04:38:08 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:38:08 localhost podman[84851]: 2025-10-14 08:38:08.649333689 +0000 UTC m=+0.183170053 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, tcib_managed=true) Oct 14 04:38:08 localhost podman[84851]: 2025-10-14 08:38:08.744357691 +0000 UTC m=+0.278194105 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5) Oct 14 04:38:08 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:38:08 localhost podman[84850]: 2025-10-14 08:38:08.707676031 +0000 UTC m=+0.241406222 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, release=1, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=ovn_metadata_agent) Oct 14 04:38:08 localhost podman[84850]: 2025-10-14 08:38:08.791172778 +0000 UTC m=+0.324902999 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, vcs-type=git, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, 
io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, container_name=ovn_metadata_agent) Oct 14 04:38:08 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:38:08 localhost podman[84849]: 2025-10-14 08:38:08.992198561 +0000 UTC m=+0.532293400 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, version=17.1.9, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 14 04:38:09 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:38:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:38:13 localhost systemd[1]: tmp-crun.Zt65G3.mount: Deactivated successfully. Oct 14 04:38:13 localhost podman[84944]: 2025-10-14 08:38:13.559213735 +0000 UTC m=+0.097442345 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9) Oct 14 04:38:13 localhost podman[84944]: 2025-10-14 08:38:13.752118804 +0000 UTC m=+0.290347414 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:38:13 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:38:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:38:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:38:29 localhost systemd[1]: tmp-crun.gHKGpV.mount: Deactivated successfully. 
Oct 14 04:38:29 localhost podman[84997]: 2025-10-14 08:38:29.590767176 +0000 UTC m=+0.134245369 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 14 04:38:29 localhost podman[84997]: 2025-10-14 08:38:29.603103743 +0000 UTC m=+0.146581936 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9) Oct 14 04:38:29 localhost podman[84998]: 2025-10-14 08:38:29.562332255 +0000 UTC m=+0.102454240 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, container_name=iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:38:29 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:38:29 localhost podman[84998]: 2025-10-14 08:38:29.64504458 +0000 UTC m=+0.185166525 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, 
name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64) Oct 14 04:38:29 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:38:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:38:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:38:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:38:37 localhost systemd[1]: tmp-crun.yQCJ3P.mount: Deactivated successfully. Oct 14 04:38:37 localhost podman[85056]: 2025-10-14 08:38:37.567181978 +0000 UTC m=+0.103756404 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, io.buildah.version=1.33.12, config_id=tripleo_step4, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1) Oct 14 04:38:37 localhost podman[85056]: 2025-10-14 08:38:37.602130202 +0000 UTC m=+0.138704598 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc.) Oct 14 04:38:37 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:38:37 localhost podman[85057]: 2025-10-14 08:38:37.657768493 +0000 UTC m=+0.190990879 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1) Oct 14 04:38:37 localhost systemd[1]: tmp-crun.yQLaOy.mount: Deactivated successfully. Oct 14 04:38:37 localhost podman[85057]: 2025-10-14 08:38:37.69208365 +0000 UTC m=+0.225306086 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 14 04:38:37 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:38:37 localhost podman[85055]: 2025-10-14 08:38:37.711817831 +0000 UTC m=+0.251275943 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=) Oct 14 04:38:37 localhost podman[85055]: 2025-10-14 08:38:37.764691559 +0000 UTC m=+0.304149711 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 04:38:37 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:38:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:38:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:38:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:38:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:38:39 localhost podman[85126]: 2025-10-14 08:38:39.544577075 +0000 UTC m=+0.083634802 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4) Oct 14 04:38:39 localhost podman[85127]: 2025-10-14 08:38:39.604385976 +0000 UTC m=+0.141939973 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9) Oct 14 04:38:39 localhost systemd[1]: tmp-crun.ORuITz.mount: Deactivated successfully. 
Oct 14 04:38:39 localhost podman[85129]: 2025-10-14 08:38:39.656030461 +0000 UTC m=+0.187664491 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:38:39 localhost podman[85126]: 2025-10-14 08:38:39.670157824 +0000 UTC m=+0.209215541 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, 
build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible) Oct 14 04:38:39 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:38:39 localhost podman[85129]: 2025-10-14 08:38:39.686925577 +0000 UTC m=+0.218559577 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Oct 14 04:38:39 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:38:39 localhost podman[85128]: 2025-10-14 08:38:39.754917594 +0000 UTC m=+0.288875256 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, distribution-scope=public) Oct 14 04:38:39 localhost podman[85128]: 2025-10-14 08:38:39.788012689 +0000 UTC m=+0.321970341 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 14 04:38:39 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:38:39 localhost podman[85127]: 2025-10-14 08:38:39.976991765 +0000 UTC m=+0.514545692 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., 
container_name=nova_migration_target, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:38:39 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:38:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:38:44 localhost systemd[1]: tmp-crun.wAvV3Q.mount: Deactivated successfully. Oct 14 04:38:44 localhost podman[85225]: 2025-10-14 08:38:44.554190458 +0000 UTC m=+0.093986396 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.33.12, release=1, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:38:44 localhost podman[85225]: 2025-10-14 08:38:44.765343418 +0000 UTC m=+0.305139366 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, version=17.1.9, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:38:44 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:39:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:39:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:39:00 localhost systemd[1]: tmp-crun.3nxEcb.mount: Deactivated successfully. 
Oct 14 04:39:00 localhost podman[85331]: 2025-10-14 08:39:00.567016814 +0000 UTC m=+0.097140959 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, container_name=iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat 
OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Oct 14 04:39:00 localhost podman[85330]: 2025-10-14 08:39:00.613913062 +0000 UTC m=+0.144078588 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, version=17.1.9, release=2, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.buildah.version=1.33.12) Oct 14 04:39:00 localhost podman[85330]: 2025-10-14 08:39:00.625226931 +0000 UTC m=+0.155392497 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, build-date=2025-07-21T13:04:03, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 14 04:39:00 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:39:00 localhost podman[85331]: 2025-10-14 08:39:00.708293947 +0000 UTC m=+0.238418092 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-type=git, build-date=2025-07-21T13:27:15, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Oct 14 04:39:00 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:39:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:39:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:39:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:39:08 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:39:08 localhost recover_tripleo_nova_virtqemud[85388]: 62532 Oct 14 04:39:08 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:39:08 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 14 04:39:08 localhost podman[85371]: 2025-10-14 08:39:08.571017512 +0000 UTC m=+0.095058163 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 14 04:39:08 localhost systemd[1]: tmp-crun.fmDq5g.mount: Deactivated successfully. Oct 14 04:39:08 localhost podman[85369]: 2025-10-14 08:39:08.625172293 +0000 UTC m=+0.152734897 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.9, release=1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:39:08 localhost podman[85371]: 2025-10-14 08:39:08.628589983 +0000 UTC m=+0.152630544 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, architecture=x86_64) Oct 14 04:39:08 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:39:08 localhost podman[85369]: 2025-10-14 08:39:08.655885445 +0000 UTC m=+0.183448039 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4) Oct 14 04:39:08 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:39:08 localhost podman[85370]: 2025-10-14 08:39:08.724735205 +0000 UTC m=+0.250164404 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 04:39:08 localhost podman[85370]: 2025-10-14 08:39:08.738127739 +0000 UTC m=+0.263557028 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:39:08 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:39:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:39:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:39:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:39:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:39:10 localhost systemd[1]: tmp-crun.eFIP2c.mount: Deactivated successfully. Oct 14 04:39:10 localhost podman[85445]: 2025-10-14 08:39:10.555905505 +0000 UTC m=+0.090066061 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12) Oct 14 04:39:10 localhost systemd[1]: tmp-crun.tYEyrX.mount: Deactivated successfully. Oct 14 04:39:10 localhost podman[85458]: 2025-10-14 08:39:10.602535758 +0000 UTC m=+0.123947857 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1) Oct 14 04:39:10 localhost podman[85444]: 2025-10-14 08:39:10.630862097 +0000 UTC m=+0.165104995 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, release=1, container_name=ovn_controller, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44) Oct 14 04:39:10 localhost podman[85458]: 2025-10-14 08:39:10.654097571 +0000 UTC m=+0.175509670 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:39:10 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:39:10 localhost podman[85444]: 2025-10-14 08:39:10.677969282 +0000 UTC m=+0.212212170 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., version=17.1.9) Oct 14 04:39:10 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:39:10 localhost podman[85446]: 2025-10-14 08:39:10.724091391 +0000 UTC m=+0.251838478 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 14 04:39:10 localhost podman[85446]: 2025-10-14 08:39:10.769162872 +0000 UTC m=+0.296909909 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 14 04:39:10 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:39:10 localhost podman[85445]: 2025-10-14 08:39:10.920709499 +0000 UTC m=+0.454870145 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:39:10 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:39:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:39:15 localhost podman[85540]: 2025-10-14 08:39:15.569756419 +0000 UTC m=+0.115092913 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, batch=17.1_20250721.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container) Oct 14 04:39:15 localhost podman[85540]: 2025-10-14 08:39:15.760296806 +0000 UTC m=+0.305633280 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_id=tripleo_step1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:39:15 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:39:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:39:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:39:31 localhost podman[85616]: 2025-10-14 08:39:31.545404315 +0000 UTC m=+0.087382020 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, release=1, name=rhosp17/openstack-iscsid, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible) Oct 14 04:39:31 localhost podman[85616]: 2025-10-14 08:39:31.553789267 +0000 UTC m=+0.095767032 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container) Oct 14 04:39:31 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:39:31 localhost podman[85615]: 2025-10-14 08:39:31.646765634 +0000 UTC m=+0.190925147 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, distribution-scope=public, release=2) Oct 14 04:39:31 localhost podman[85615]: 2025-10-14 08:39:31.654504249 +0000 UTC m=+0.198663812 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, container_name=collectd, release=2, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack 
TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd) Oct 14 04:39:31 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:39:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:39:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:39:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:39:39 localhost podman[85650]: 2025-10-14 08:39:39.539631006 +0000 UTC m=+0.082394199 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 04:39:39 localhost podman[85651]: 2025-10-14 08:39:39.59842829 +0000 UTC m=+0.138894102 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, container_name=logrotate_crond, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, distribution-scope=public) Oct 14 04:39:39 localhost podman[85651]: 2025-10-14 08:39:39.637195694 +0000 UTC m=+0.177661456 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-cron, config_id=tripleo_step4, container_name=logrotate_crond, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:39:39 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:39:39 localhost podman[85652]: 2025-10-14 08:39:39.653419253 +0000 UTC m=+0.187145028 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9) Oct 14 04:39:39 localhost podman[85650]: 2025-10-14 08:39:39.671324977 +0000 UTC m=+0.214088150 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, version=17.1.9, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:39:39 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:39:39 localhost podman[85652]: 2025-10-14 08:39:39.706279981 +0000 UTC m=+0.240005736 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 14 04:39:39 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:39:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:39:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:39:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:39:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:39:41 localhost podman[85724]: 2025-10-14 08:39:41.548021241 +0000 UTC m=+0.084479204 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.9) Oct 14 04:39:41 localhost systemd[1]: tmp-crun.2zpTbm.mount: Deactivated 
successfully. Oct 14 04:39:41 localhost podman[85725]: 2025-10-14 08:39:41.600892369 +0000 UTC m=+0.133305454 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
tcib_managed=true, com.redhat.component=openstack-nova-compute-container, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 14 04:39:41 localhost podman[85726]: 2025-10-14 08:39:41.661678286 +0000 UTC m=+0.193702341 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64) Oct 14 04:39:41 localhost podman[85724]: 2025-10-14 08:39:41.671927357 +0000 UTC m=+0.208385300 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 04:39:41 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:39:41 localhost podman[85727]: 2025-10-14 08:39:41.72236057 +0000 UTC m=+0.248056398 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, release=1, vcs-type=git, config_id=tripleo_step5, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:39:41 localhost podman[85726]: 2025-10-14 08:39:41.750282738 +0000 UTC m=+0.282306843 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4) Oct 14 04:39:41 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:39:41 localhost podman[85727]: 2025-10-14 08:39:41.806257437 +0000 UTC m=+0.331953265 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, release=1, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute) Oct 14 04:39:41 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:39:41 localhost podman[85725]: 2025-10-14 08:39:41.967951401 +0000 UTC m=+0.500364476 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.expose-services=, version=17.1.9, tcib_managed=true, distribution-scope=public, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:39:41 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:39:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:39:46 localhost podman[85820]: 2025-10-14 08:39:46.536383973 +0000 UTC m=+0.078798684 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.9, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1) Oct 14 04:39:46 localhost podman[85820]: 2025-10-14 08:39:46.754406696 +0000 UTC m=+0.296821487 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 14 04:39:46 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:40:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:40:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:40:02 localhost podman[85926]: 2025-10-14 08:40:02.620609448 +0000 UTC m=+0.151016203 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=collectd, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, vcs-type=git) Oct 14 04:40:02 localhost podman[85927]: 2025-10-14 08:40:02.581822902 +0000 UTC m=+0.112096013 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 14 04:40:02 localhost podman[85926]: 2025-10-14 08:40:02.636254251 +0000 UTC m=+0.166661036 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, release=2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, build-date=2025-07-21T13:04:03, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:40:02 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:40:02 localhost podman[85927]: 2025-10-14 08:40:02.666266764 +0000 UTC m=+0.196539865 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, vcs-type=git, release=1, maintainer=OpenStack TripleO Team) Oct 14 04:40:02 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:40:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:40:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:40:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:40:10 localhost systemd[1]: tmp-crun.XfUTmr.mount: Deactivated successfully. Oct 14 04:40:10 localhost podman[85962]: 2025-10-14 08:40:10.569525451 +0000 UTC m=+0.102101389 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, architecture=x86_64, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:40:10 localhost systemd[1]: tmp-crun.tC1LEF.mount: Deactivated successfully. 
Oct 14 04:40:10 localhost podman[85963]: 2025-10-14 08:40:10.671234219 +0000 UTC m=+0.203716685 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-cron-container, distribution-scope=public, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible) Oct 14 04:40:10 localhost podman[85963]: 2025-10-14 08:40:10.684009557 +0000 UTC m=+0.216492083 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:40:10 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:40:10 localhost podman[85964]: 2025-10-14 08:40:10.635034093 +0000 UTC m=+0.166438710 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 04:40:10 localhost podman[85964]: 2025-10-14 08:40:10.769086466 +0000 UTC m=+0.300491043 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true) Oct 14 04:40:10 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:40:10 localhost podman[85962]: 2025-10-14 08:40:10.787495963 +0000 UTC m=+0.320071871 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, architecture=x86_64, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) Oct 14 04:40:10 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:40:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:40:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:40:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:40:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:40:12 localhost systemd[1]: tmp-crun.yle8Xw.mount: Deactivated successfully. 
Oct 14 04:40:12 localhost podman[86033]: 2025-10-14 08:40:12.552793563 +0000 UTC m=+0.081521506 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, io.openshift.expose-services=) Oct 14 04:40:12 localhost podman[86032]: 2025-10-14 08:40:12.570391318 +0000 UTC m=+0.099900411 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., release=1) Oct 14 04:40:12 localhost podman[86032]: 2025-10-14 08:40:12.622857865 +0000 UTC m=+0.152366968 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, release=1, distribution-scope=public, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.) Oct 14 04:40:12 localhost systemd[1]: tmp-crun.mBRuTg.mount: Deactivated successfully. Oct 14 04:40:12 localhost podman[86035]: 2025-10-14 08:40:12.634840401 +0000 UTC m=+0.155510391 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step5, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:40:12 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:40:12 localhost podman[86035]: 2025-10-14 08:40:12.666132459 +0000 UTC m=+0.186802459 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:40:12 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:40:12 localhost podman[86034]: 2025-10-14 08:40:12.734832294 +0000 UTC m=+0.257920608 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1) Oct 14 04:40:12 localhost podman[86034]: 2025-10-14 08:40:12.783400798 +0000 UTC m=+0.306489052 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, 
batch=17.1_20250721.1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 14 04:40:12 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:40:12 localhost podman[86033]: 2025-10-14 08:40:12.941066376 +0000 UTC m=+0.469794339 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, batch=17.1_20250721.1, container_name=nova_migration_target, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T14:48:37, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:40:12 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:40:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:40:17 localhost podman[86125]: 2025-10-14 08:40:17.548836307 +0000 UTC m=+0.090334090 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, version=17.1.9, container_name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git) Oct 14 04:40:17 localhost podman[86125]: 2025-10-14 08:40:17.746387368 +0000 UTC m=+0.287885201 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, release=1, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 14 04:40:17 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:40:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:40:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:40:33 localhost podman[86200]: 2025-10-14 08:40:33.54986455 +0000 UTC m=+0.088378047 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, container_name=iscsid, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 14 04:40:33 localhost podman[86200]: 2025-10-14 08:40:33.565535884 +0000 UTC m=+0.104049411 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, vendor=Red Hat, 
Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:40:33 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:40:33 localhost podman[86199]: 2025-10-14 08:40:33.653115209 +0000 UTC m=+0.193221178 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:40:33 localhost podman[86199]: 2025-10-14 08:40:33.666110473 +0000 UTC m=+0.206216442 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, 
summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9) Oct 14 04:40:33 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:40:35 localhost sshd[86238]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:40:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:40:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:40:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:40:41 localhost podman[86240]: 2025-10-14 08:40:41.55498646 +0000 UTC m=+0.092003493 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Oct 14 04:40:41 localhost podman[86240]: 2025-10-14 08:40:41.584168371 +0000 UTC m=+0.121185404 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Oct 14 04:40:41 localhost systemd[1]: tmp-crun.boytI8.mount: Deactivated successfully. Oct 14 04:40:41 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:40:41 localhost podman[86242]: 2025-10-14 08:40:41.607822356 +0000 UTC m=+0.139795696 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, release=1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:40:41 localhost podman[86241]: 2025-10-14 08:40:41.663292053 +0000 UTC m=+0.196963208 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, architecture=x86_64, 
config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:40:41 localhost podman[86241]: 2025-10-14 08:40:41.701079861 +0000 UTC m=+0.234750966 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, container_name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Oct 14 04:40:41 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:40:41 localhost podman[86242]: 2025-10-14 08:40:41.718388528 +0000 UTC m=+0.250361928 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:40:41 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:40:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:40:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:40:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:40:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:40:43 localhost podman[86317]: 2025-10-14 08:40:43.555280651 +0000 UTC m=+0.085890612 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.9, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:40:43 localhost podman[86318]: 2025-10-14 08:40:43.606577536 +0000 UTC m=+0.129892264 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team) Oct 14 04:40:43 localhost podman[86316]: 2025-10-14 
08:40:43.6558713 +0000 UTC m=+0.188043532 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, container_name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp 
openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, tcib_managed=true, vcs-type=git, version=17.1.9, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:40:43 localhost podman[86318]: 2025-10-14 08:40:43.667091056 +0000 UTC m=+0.190405784 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, architecture=x86_64, container_name=nova_compute) Oct 14 04:40:43 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:40:43 localhost podman[86317]: 2025-10-14 08:40:43.682436691 +0000 UTC m=+0.213046642 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, release=1, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO 
Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.9, container_name=ovn_metadata_agent, io.buildah.version=1.33.12) Oct 14 04:40:43 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:40:43 localhost podman[86315]: 2025-10-14 08:40:43.763933305 +0000 UTC m=+0.296830906 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.9, architecture=x86_64, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, tcib_managed=true, 
io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44) Oct 14 04:40:43 localhost podman[86315]: 2025-10-14 08:40:43.812011647 +0000 UTC m=+0.344909228 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git) Oct 14 04:40:43 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:40:44 localhost podman[86316]: 2025-10-14 08:40:44.038281547 +0000 UTC m=+0.570453769 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:40:44 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:40:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:40:48 localhost podman[86411]: 2025-10-14 08:40:48.532215651 +0000 UTC m=+0.075918089 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step1) Oct 14 04:40:48 localhost podman[86411]: 2025-10-14 08:40:48.757279429 +0000 UTC m=+0.300981897 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:40:48 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:40:58 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:40:58 localhost recover_tripleo_nova_virtqemud[86456]: 62532 Oct 14 04:40:58 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:40:58 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:41:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:41:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:41:04 localhost podman[86518]: 2025-10-14 08:41:04.536510671 +0000 UTC m=+0.084221138 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, vcs-type=git, build-date=2025-07-21T13:04:03) Oct 14 04:41:04 localhost systemd[1]: tmp-crun.17sLqu.mount: Deactivated successfully. Oct 14 04:41:04 localhost podman[86519]: 2025-10-14 08:41:04.591648388 +0000 UTC m=+0.137219708 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container) Oct 14 04:41:04 localhost podman[86518]: 2025-10-14 08:41:04.599344801 +0000 UTC m=+0.147055328 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=2, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, 
io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:41:04 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:41:04 localhost podman[86519]: 2025-10-14 08:41:04.65566108 +0000 UTC m=+0.201232440 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-type=git, com.redhat.component=openstack-iscsid-container, vendor=Red 
Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12) Oct 14 04:41:04 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:41:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:41:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:41:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:41:12 localhost systemd[1]: tmp-crun.YSqHZE.mount: Deactivated successfully. Oct 14 04:41:12 localhost podman[86557]: 2025-10-14 08:41:12.568080268 +0000 UTC m=+0.107976586 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Oct 14 04:41:12 localhost podman[86558]: 2025-10-14 08:41:12.601412428 +0000 UTC m=+0.138855151 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, distribution-scope=public, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, com.redhat.component=openstack-cron-container) Oct 14 04:41:12 localhost podman[86557]: 2025-10-14 08:41:12.653202917 +0000 UTC m=+0.193099285 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, distribution-scope=public, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 04:41:12 localhost podman[86559]: 2025-10-14 08:41:12.67183063 +0000 UTC m=+0.200702156 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, release=1, 
com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git) Oct 14 04:41:12 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:41:12 localhost podman[86558]: 2025-10-14 08:41:12.689812285 +0000 UTC m=+0.227254968 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=logrotate_crond, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git) Oct 14 04:41:12 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:41:12 localhost podman[86559]: 2025-10-14 08:41:12.733290775 +0000 UTC m=+0.262162331 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:41:12 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:41:14 localhost podman[86629]: 2025-10-14 08:41:14.552852148 +0000 UTC m=+0.069953040 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44) Oct 14 04:41:14 localhost podman[86629]: 2025-10-14 08:41:14.573129294 +0000 
UTC m=+0.090230166 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, tcib_managed=true) Oct 14 04:41:14 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:41:14 localhost systemd[1]: tmp-crun.gMmjgx.mount: Deactivated successfully. Oct 14 04:41:14 localhost podman[86631]: 2025-10-14 08:41:14.620611049 +0000 UTC m=+0.126950456 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, tcib_managed=true, release=1, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:41:14 localhost podman[86631]: 2025-10-14 08:41:14.676705092 +0000 UTC m=+0.183044499 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Oct 14 04:41:14 localhost podman[86638]: 2025-10-14 08:41:14.686893151 +0000 UTC m=+0.191237745 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, version=17.1.9, 
build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, vcs-type=git, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:41:14 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:41:14 localhost podman[86638]: 2025-10-14 08:41:14.719176924 +0000 UTC m=+0.223521568 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, container_name=nova_compute, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, 
build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc.) Oct 14 04:41:14 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:41:14 localhost podman[86630]: 2025-10-14 08:41:14.726922159 +0000 UTC m=+0.235968278 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, 
managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute) Oct 14 04:41:15 localhost podman[86630]: 2025-10-14 08:41:15.084318426 +0000 UTC m=+0.593364605 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=nova_migration_target, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Oct 14 04:41:15 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:41:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:41:19 localhost podman[86723]: 2025-10-14 08:41:19.562307687 +0000 UTC m=+0.105794348 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, 
container_name=metrics_qdr, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:41:19 localhost podman[86723]: 2025-10-14 08:41:19.755999736 +0000 UTC m=+0.299486357 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.openshift.expose-services=, architecture=x86_64) Oct 14 04:41:19 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:41:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:41:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:41:35 localhost podman[86795]: 2025-10-14 08:41:35.538070275 +0000 UTC m=+0.078015374 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, architecture=x86_64, container_name=iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:41:35 localhost systemd[1]: tmp-crun.7jmgPO.mount: Deactivated successfully. Oct 14 04:41:35 localhost podman[86794]: 2025-10-14 08:41:35.601583194 +0000 UTC m=+0.144300056 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, container_name=collectd, io.openshift.expose-services=, batch=17.1_20250721.1) Oct 14 04:41:35 localhost podman[86794]: 2025-10-14 08:41:35.613093118 +0000 UTC m=+0.155809970 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, managed_by=tripleo_ansible, version=17.1.9, container_name=collectd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:41:35 localhost podman[86795]: 2025-10-14 08:41:35.624826048 +0000 UTC m=+0.164771117 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, release=1, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 04:41:35 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:41:35 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:41:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:41:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:41:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:41:43 localhost podman[86833]: 2025-10-14 08:41:43.552688215 +0000 UTC m=+0.090498333 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:41:43 localhost podman[86833]: 2025-10-14 08:41:43.594041268 +0000 UTC m=+0.131851386 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, version=17.1.9) Oct 14 04:41:43 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:41:43 localhost podman[86834]: 2025-10-14 08:41:43.608705365 +0000 UTC m=+0.143554285 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, version=17.1.9) Oct 14 04:41:43 localhost podman[86834]: 2025-10-14 08:41:43.645127228 +0000 UTC m=+0.179976138 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:41:43 localhost systemd[1]: tmp-crun.QvNAAW.mount: Deactivated successfully. Oct 14 04:41:43 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:41:43 localhost podman[86835]: 2025-10-14 08:41:43.66487013 +0000 UTC m=+0.194680376 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 14 04:41:43 localhost podman[86835]: 2025-10-14 08:41:43.691055711 +0000 UTC m=+0.220865977 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, version=17.1.9, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12) Oct 14 04:41:43 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:41:45 localhost podman[86905]: 2025-10-14 08:41:45.547859859 +0000 UTC m=+0.086209660 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, release=1, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 04:41:45 localhost podman[86905]: 2025-10-14 08:41:45.595699184 +0000 UTC m=+0.134048985 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, release=1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, 
architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public) Oct 14 04:41:45 localhost podman[86907]: 2025-10-14 08:41:45.607804744 +0000 UTC m=+0.139864238 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, vcs-type=git, version=17.1.9, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1) Oct 14 04:41:45 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:41:45 localhost podman[86908]: 2025-10-14 08:41:45.651819627 +0000 UTC m=+0.182326270 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=) Oct 14 04:41:45 localhost podman[86908]: 2025-10-14 08:41:45.679044587 +0000 UTC m=+0.209551210 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, version=17.1.9, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step5) 
Oct 14 04:41:45 localhost podman[86906]: 2025-10-14 08:41:45.702694393 +0000 UTC m=+0.235983810 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, 
container_name=nova_migration_target, managed_by=tripleo_ansible, release=1, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:41:45 localhost podman[86907]: 2025-10-14 08:41:45.70337347 +0000 UTC m=+0.235432914 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible) Oct 14 04:41:45 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:41:45 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:41:46 localhost podman[86906]: 2025-10-14 08:41:46.068332017 +0000 UTC m=+0.601621514 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, io.buildah.version=1.33.12, 
name=rhosp17/openstack-nova-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:41:46 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:41:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:41:50 localhost podman[87000]: 2025-10-14 08:41:50.535834201 +0000 UTC m=+0.077614113 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=) Oct 14 04:41:50 localhost podman[87000]: 2025-10-14 08:41:50.776684947 +0000 UTC m=+0.318464819 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, architecture=x86_64, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 
17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, distribution-scope=public) Oct 14 04:41:50 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:41:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:41:59 localhost recover_tripleo_nova_virtqemud[87060]: 62532 Oct 14 04:41:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:41:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:42:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. 
Oct 14 04:42:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:42:06 localhost podman[87109]: 2025-10-14 08:42:06.62623518 +0000 UTC m=+0.157330441 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, container_name=collectd, io.buildah.version=1.33.12, release=2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03) Oct 14 04:42:06 localhost podman[87110]: 2025-10-14 08:42:06.584955668 +0000 UTC m=+0.116180792 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:42:06 localhost podman[87110]: 2025-10-14 08:42:06.671224409 +0000 UTC m=+0.202449513 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 14 04:42:06 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:42:06 localhost podman[87109]: 2025-10-14 08:42:06.693486447 +0000 UTC m=+0.224581668 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, 
maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=) Oct 14 04:42:06 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:42:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:42:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:42:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:42:14 localhost podman[87146]: 2025-10-14 08:42:14.534104847 +0000 UTC m=+0.074486790 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., 
container_name=logrotate_crond, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-cron, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52) Oct 14 04:42:14 localhost podman[87146]: 2025-10-14 08:42:14.548330433 +0000 UTC m=+0.088712376 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 04:42:14 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:42:14 localhost systemd[1]: tmp-crun.N4ALFp.mount: Deactivated successfully. Oct 14 04:42:14 localhost podman[87147]: 2025-10-14 08:42:14.650936545 +0000 UTC m=+0.191492752 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9) Oct 14 04:42:14 localhost podman[87145]: 2025-10-14 08:42:14.611046221 +0000 UTC m=+0.150918850 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:45:33, distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 14 04:42:14 localhost podman[87145]: 2025-10-14 08:42:14.691087186 +0000 UTC m=+0.230959765 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
release=1) Oct 14 04:42:14 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:42:14 localhost podman[87147]: 2025-10-14 08:42:14.708151927 +0000 UTC m=+0.248708084 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 04:42:14 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:42:16 localhost podman[87216]: 2025-10-14 08:42:16.560406476 +0000 UTC m=+0.097485638 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, container_name=ovn_controller, version=17.1.9, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 04:42:16 localhost podman[87216]: 2025-10-14 08:42:16.614090044 +0000 
UTC m=+0.151169186 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible) Oct 14 04:42:16 localhost podman[87218]: 2025-10-14 08:42:16.611116196 +0000 UTC m=+0.140934857 container health_status 
9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, maintainer=OpenStack 
TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 14 04:42:16 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:42:16 localhost podman[87218]: 2025-10-14 08:42:16.686251082 +0000 UTC m=+0.216069753 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, config_id=tripleo_step4) Oct 14 04:42:16 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:42:16 localhost podman[87219]: 2025-10-14 08:42:16.668834321 +0000 UTC m=+0.195538909 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_id=tripleo_step5, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 14 04:42:16 localhost podman[87217]: 2025-10-14 08:42:16.768793163 +0000 UTC m=+0.299772944 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:42:16 localhost podman[87219]: 2025-10-14 08:42:16.799355491 +0000 UTC m=+0.326060049 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:42:16 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:42:17 localhost podman[87217]: 2025-10-14 08:42:17.177112346 +0000 UTC m=+0.708092107 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 14 04:42:17 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:42:17 localhost systemd[1]: tmp-crun.840dyz.mount: Deactivated successfully. Oct 14 04:42:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:42:21 localhost podman[87314]: 2025-10-14 08:42:21.536924734 +0000 UTC m=+0.082300277 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red 
Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible) Oct 14 04:42:21 localhost podman[87314]: 2025-10-14 08:42:21.769168863 +0000 UTC m=+0.314544366 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, release=1) Oct 14 04:42:21 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:42:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:42:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:42:37 localhost systemd[1]: tmp-crun.7BJyg2.mount: Deactivated successfully. 
Oct 14 04:42:37 localhost podman[87386]: 2025-10-14 08:42:37.555205603 +0000 UTC m=+0.095667389 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=collectd, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=2, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1) Oct 14 04:42:37 localhost podman[87386]: 2025-10-14 08:42:37.589433468 +0000 UTC m=+0.129895204 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 14 04:42:37 localhost systemd[1]: tmp-crun.stoC8M.mount: Deactivated successfully. Oct 14 04:42:37 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:42:37 localhost podman[87387]: 2025-10-14 08:42:37.594539883 +0000 UTC m=+0.132948045 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, release=1, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack 
Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public) Oct 14 04:42:37 localhost podman[87387]: 2025-10-14 08:42:37.695940023 +0000 UTC m=+0.234348155 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, 
version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=) Oct 14 04:42:37 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:42:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:42:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5646 writes, 25K keys, 5646 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5646 writes, 702 syncs, 8.04 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 424 writes, 1900 keys, 424 commit groups, 1.0 writes per commit group, ingest: 2.31 MB, 0.00 MB/s#012Interval WAL: 424 writes, 136 syncs, 3.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 04:42:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:42:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:42:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:42:45 localhost podman[87425]: 2025-10-14 08:42:45.550486582 +0000 UTC m=+0.090978165 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, architecture=x86_64, config_id=tripleo_step4) Oct 14 04:42:45 localhost systemd[1]: tmp-crun.6jjuhX.mount: Deactivated successfully. Oct 14 04:42:45 localhost podman[87426]: 2025-10-14 08:42:45.598609814 +0000 UTC m=+0.135827871 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:42:45 localhost podman[87426]: 2025-10-14 08:42:45.605711302 +0000 UTC m=+0.142929339 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, release=1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack 
osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 04:42:45 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:42:45 localhost podman[87427]: 2025-10-14 08:42:45.64988479 +0000 UTC m=+0.184214270 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 04:42:45 localhost podman[87425]: 2025-10-14 08:42:45.65594019 +0000 UTC m=+0.196431723 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, release=1, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 04:42:45 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:42:45 localhost podman[87427]: 2025-10-14 08:42:45.675095296 +0000 UTC m=+0.209424796 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.expose-services=) Oct 14 04:42:45 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:42:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:42:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 4827 writes, 21K keys, 4827 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4827 writes, 653 syncs, 7.39 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 536 writes, 1959 keys, 536 commit groups, 1.0 writes per commit group, ingest: 2.50 MB, 0.00 MB/s#012Interval WAL: 536 writes, 203 syncs, 2.64 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 04:42:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:42:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:42:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:42:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:42:47 localhost systemd[1]: tmp-crun.OYltj3.mount: Deactivated successfully. 
Oct 14 04:42:47 localhost podman[87499]: 2025-10-14 08:42:47.54538819 +0000 UTC m=+0.078590558 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, release=1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, architecture=x86_64, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:42:47 localhost podman[87497]: 2025-10-14 08:42:47.611777975 +0000 UTC m=+0.146614845 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64) Oct 14 04:42:47 localhost podman[87498]: 2025-10-14 08:42:47.652833821 +0000 UTC m=+0.184256461 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, container_name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:42:47 localhost podman[87497]: 2025-10-14 08:42:47.683469861 +0000 UTC m=+0.218306731 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:42:47 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:42:47 localhost podman[87499]: 2025-10-14 08:42:47.77842099 +0000 UTC m=+0.311623318 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 14 04:42:47 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:42:47 localhost podman[87500]: 2025-10-14 08:42:47.587987746 +0000 UTC m=+0.110178873 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=nova_compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, version=17.1.9) Oct 14 04:42:47 localhost podman[87500]: 2025-10-14 08:42:47.861983168 +0000 UTC m=+0.384174255 container exec_died 
a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.33.12) Oct 14 04:42:47 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:42:48 localhost podman[87498]: 2025-10-14 08:42:48.034327354 +0000 UTC m=+0.565749984 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, release=1) Oct 14 04:42:48 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:42:48 localhost systemd[1]: tmp-crun.MUZWsf.mount: Deactivated successfully. Oct 14 04:42:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:42:52 localhost podman[87596]: 2025-10-14 08:42:52.542829812 +0000 UTC m=+0.087248027 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-qdrouterd) Oct 14 04:42:52 localhost podman[87596]: 2025-10-14 08:42:52.767540351 +0000 UTC m=+0.311958596 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12) Oct 14 04:42:52 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:43:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:43:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:43:08 localhost systemd[1]: tmp-crun.n8MnGr.mount: Deactivated successfully. Oct 14 04:43:08 localhost podman[87703]: 2025-10-14 08:43:08.557709632 +0000 UTC m=+0.095595808 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, config_id=tripleo_step3, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd) Oct 14 04:43:08 localhost podman[87703]: 2025-10-14 08:43:08.601199661 +0000 UTC m=+0.139085837 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, vcs-type=git, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, release=2) Oct 14 04:43:08 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:43:08 localhost podman[87704]: 2025-10-14 08:43:08.60344036 +0000 UTC m=+0.139175989 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, 
distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:43:08 localhost podman[87704]: 2025-10-14 08:43:08.684222836 +0000 UTC m=+0.219958465 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 14 04:43:08 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:43:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:43:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:43:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:43:16 localhost podman[87743]: 2025-10-14 08:43:16.554759229 +0000 UTC m=+0.085297536 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:43:16 localhost podman[87743]: 2025-10-14 08:43:16.590965586 +0000 UTC m=+0.121503923 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33) Oct 14 04:43:16 localhost podman[87744]: 2025-10-14 08:43:16.603582479 +0000 UTC m=+0.132753280 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container) Oct 14 04:43:16 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:43:16 localhost podman[87744]: 2025-10-14 08:43:16.617045065 +0000 UTC m=+0.146215876 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, version=17.1.9, batch=17.1_20250721.1, 
build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git) Oct 14 04:43:16 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:43:16 localhost systemd[1]: tmp-crun.Zk5nTq.mount: Deactivated successfully. Oct 14 04:43:16 localhost podman[87745]: 2025-10-14 08:43:16.670020775 +0000 UTC m=+0.195895819 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, release=1) Oct 14 04:43:16 localhost podman[87745]: 2025-10-14 08:43:16.72507883 +0000 UTC m=+0.250953734 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12) Oct 14 04:43:16 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:43:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:43:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:43:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:43:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:43:18 localhost systemd[1]: tmp-crun.8mwzJ2.mount: Deactivated successfully. 
Oct 14 04:43:18 localhost podman[87819]: 2025-10-14 08:43:18.539074787 +0000 UTC m=+0.070250377 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 14 04:43:18 localhost podman[87813]: 2025-10-14 08:43:18.651079358 +0000 UTC m=+0.183567654 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git) Oct 14 04:43:18 localhost podman[87820]: 2025-10-14 08:43:18.627502135 +0000 UTC m=+0.154878115 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=nova_compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 14 04:43:18 localhost podman[87812]: 2025-10-14 08:43:18.60232998 +0000 UTC m=+0.143371422 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1) Oct 14 04:43:18 localhost podman[87820]: 2025-10-14 08:43:18.706961205 +0000 UTC m=+0.234337145 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, batch=17.1_20250721.1) Oct 14 04:43:18 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:43:18 localhost podman[87819]: 2025-10-14 08:43:18.73063247 +0000 UTC m=+0.261808030 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T16:28:53, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 04:43:18 localhost podman[87812]: 2025-10-14 08:43:18.731086663 +0000 UTC m=+0.272128095 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, container_name=ovn_controller, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 14 04:43:18 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:43:18 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:43:19 localhost podman[87813]: 2025-10-14 08:43:19.008094984 +0000 UTC m=+0.540583300 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public) Oct 14 04:43:19 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:43:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:43:23 localhost systemd[1]: tmp-crun.q0RAno.mount: Deactivated successfully. 
Oct 14 04:43:23 localhost podman[87907]: 2025-10-14 08:43:23.557157013 +0000 UTC m=+0.097554239 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step1, container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 14 04:43:23 localhost podman[87907]: 2025-10-14 08:43:23.760992501 +0000 UTC m=+0.301389667 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team) Oct 14 04:43:23 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:43:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:43:32 localhost recover_tripleo_nova_virtqemud[87937]: 62532 Oct 14 04:43:32 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:43:32 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:43:39 localhost systemd[1]: tmp-crun.9OPgWQ.mount: Deactivated successfully. 
Oct 14 04:43:39 localhost podman[87983]: 2025-10-14 08:43:39.552904259 +0000 UTC m=+0.093264796 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:43:39 localhost podman[87983]: 2025-10-14 08:43:39.56919264 +0000 UTC m=+0.109553187 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 14 04:43:39 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:43:39 localhost podman[87982]: 2025-10-14 08:43:39.652242575 +0000 UTC m=+0.194376919 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, managed_by=tripleo_ansible, architecture=x86_64, container_name=collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, version=17.1.9) Oct 14 04:43:39 localhost podman[87982]: 2025-10-14 08:43:39.690190507 +0000 UTC m=+0.232324841 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, 
distribution-scope=public, build-date=2025-07-21T13:04:03, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, container_name=collectd, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1) Oct 14 04:43:39 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:43:40 localhost systemd[1]: tmp-crun.2xUqGi.mount: Deactivated successfully. Oct 14 04:43:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:43:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:43:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:43:47 localhost podman[88023]: 2025-10-14 08:43:47.548341311 +0000 UTC m=+0.084112754 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1) Oct 14 04:43:47 localhost podman[88023]: 2025-10-14 08:43:47.601487826 +0000 UTC m=+0.137259219 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64) Oct 14 04:43:47 localhost podman[88021]: 2025-10-14 08:43:47.605972885 +0000 UTC m=+0.142862247 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, 
io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, release=1, vcs-type=git, container_name=ceilometer_agent_compute) Oct 14 04:43:47 localhost podman[88022]: 2025-10-14 08:43:47.661183745 +0000 UTC m=+0.196694971 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, 
name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, architecture=x86_64, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, 
container_name=logrotate_crond) Oct 14 04:43:47 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:43:47 localhost podman[88021]: 2025-10-14 08:43:47.664205744 +0000 UTC m=+0.201095116 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible) Oct 14 04:43:47 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:43:47 localhost podman[88022]: 2025-10-14 08:43:47.72268906 +0000 UTC m=+0.258200336 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, name=rhosp17/openstack-cron, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 14 04:43:47 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:43:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:43:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:43:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:43:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:43:49 localhost systemd[1]: tmp-crun.GU7mKV.mount: Deactivated successfully. 
Oct 14 04:43:49 localhost podman[88093]: 2025-10-14 08:43:49.535186287 +0000 UTC m=+0.080382036 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, distribution-scope=public, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:43:49 localhost systemd[1]: tmp-crun.R9sffK.mount: Deactivated 
successfully. Oct 14 04:43:49 localhost podman[88101]: 2025-10-14 08:43:49.554130837 +0000 UTC m=+0.085770427 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:43:49 localhost podman[88094]: 2025-10-14 08:43:49.566623178 +0000 UTC m=+0.106607089 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1) Oct 14 04:43:49 localhost podman[88093]: 2025-10-14 08:43:49.591168107 +0000 UTC m=+0.136363946 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 14 04:43:49 localhost podman[88095]: 2025-10-14 08:43:49.598822728 +0000 UTC m=+0.133194791 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12) Oct 14 04:43:49 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:43:49 localhost podman[88101]: 2025-10-14 08:43:49.63405624 +0000 UTC m=+0.165695870 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 14 04:43:49 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:43:49 localhost podman[88095]: 2025-10-14 08:43:49.652435515 +0000 UTC m=+0.186807638 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:43:49 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:43:49 localhost podman[88094]: 2025-10-14 08:43:49.87162303 +0000 UTC m=+0.411606961 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4) Oct 14 04:43:49 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:43:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:43:54 localhost podman[88190]: 2025-10-14 08:43:54.53733039 +0000 UTC m=+0.083117022 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, release=1, config_id=tripleo_step1, vcs-type=git) Oct 14 04:43:54 localhost podman[88190]: 2025-10-14 08:43:54.732990356 +0000 UTC m=+0.278776988 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:43:54 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:44:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:44:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:44:10 localhost podman[88345]: 2025-10-14 08:44:10.551711951 +0000 UTC m=+0.093237342 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, container_name=collectd, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:44:10 localhost podman[88345]: 2025-10-14 08:44:10.595248368 +0000 UTC m=+0.136773769 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:44:10 localhost podman[88346]: 2025-10-14 08:44:10.594478078 +0000 UTC m=+0.131199371 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.openshift.expose-services=, container_name=iscsid, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:44:10 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:44:10 localhost podman[88346]: 2025-10-14 08:44:10.682189051 +0000 UTC m=+0.218910334 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, tcib_managed=true, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, release=1, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 04:44:10 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:44:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:44:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:44:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:44:18 localhost systemd[1]: tmp-crun.HD8jLo.mount: Deactivated successfully. Oct 14 04:44:18 localhost systemd[1]: tmp-crun.GFqnQc.mount: Deactivated successfully. 
Oct 14 04:44:18 localhost podman[88384]: 2025-10-14 08:44:18.589873172 +0000 UTC m=+0.126900287 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:44:18 localhost podman[88384]: 2025-10-14 08:44:18.669773768 +0000 UTC m=+0.206800893 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, version=17.1.9, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, release=1) Oct 14 04:44:18 localhost podman[88385]: 2025-10-14 08:44:18.625631364 +0000 UTC m=+0.161791306 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, 
vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:44:18 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:44:18 localhost podman[88385]: 2025-10-14 08:44:18.705818906 +0000 UTC m=+0.241978858 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 
17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-cron-container) Oct 14 04:44:18 localhost podman[88386]: 2025-10-14 08:44:18.718502824 +0000 UTC m=+0.245497092 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, release=1, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Oct 14 04:44:18 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:44:18 localhost podman[88386]: 2025-10-14 08:44:18.772182562 +0000 UTC m=+0.299176840 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1) Oct 14 04:44:18 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:44:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:44:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:44:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:44:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:44:20 localhost systemd[1]: tmp-crun.pw5Nn9.mount: Deactivated successfully. 
Oct 14 04:44:20 localhost podman[88455]: 2025-10-14 08:44:20.572226327 +0000 UTC m=+0.100986837 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:28:44, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, architecture=x86_64, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team) Oct 14 04:44:20 localhost podman[88458]: 2025-10-14 08:44:20.62122559 +0000 
UTC m=+0.142892052 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, container_name=nova_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:44:20 localhost podman[88456]: 2025-10-14 08:44:20.592687291 +0000 UTC m=+0.117998160 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:44:20 localhost podman[88455]: 2025-10-14 08:44:20.672590787 +0000 UTC m=+0.201351267 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:44:20 localhost podman[88458]: 2025-10-14 08:44:20.675596857 +0000 UTC m=+0.197263279 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, container_name=nova_compute) Oct 14 04:44:20 localhost podman[88457]: 2025-10-14 08:44:20.684605347 +0000 UTC m=+0.206837894 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, release=1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Oct 14 04:44:20 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:44:20 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:44:20 localhost podman[88457]: 2025-10-14 08:44:20.757214438 +0000 UTC m=+0.279446915 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:44:20 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:44:20 localhost podman[88456]: 2025-10-14 08:44:20.990218187 +0000 UTC m=+0.515529026 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:44:21 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:44:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:44:25 localhost systemd[1]: tmp-crun.jwM29o.mount: Deactivated successfully. 
Oct 14 04:44:25 localhost podman[88546]: 2025-10-14 08:44:25.55649848 +0000 UTC m=+0.096240931 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, distribution-scope=public) Oct 14 04:44:25 localhost podman[88546]: 2025-10-14 08:44:25.771676245 +0000 UTC m=+0.311418726 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, release=1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:44:25 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:44:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:44:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:44:41 localhost podman[88619]: 2025-10-14 08:44:41.557181416 +0000 UTC m=+0.089340108 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:44:41 localhost podman[88619]: 2025-10-14 08:44:41.595122485 +0000 UTC m=+0.127281127 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, batch=17.1_20250721.1, 
tcib_managed=true, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team) Oct 14 04:44:41 localhost systemd[1]: tmp-crun.9H40bL.mount: Deactivated successfully. Oct 14 04:44:41 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:44:41 localhost podman[88618]: 2025-10-14 08:44:41.617573762 +0000 UTC m=+0.152320653 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, com.redhat.component=openstack-collectd-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, release=2, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=collectd, tcib_managed=true) Oct 14 04:44:41 localhost podman[88618]: 2025-10-14 08:44:41.65543715 +0000 UTC m=+0.190184001 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, version=17.1.9, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:44:41 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:44:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:44:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:44:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:44:49 localhost systemd[1]: tmp-crun.qxFt05.mount: Deactivated successfully. Oct 14 04:44:49 localhost podman[88658]: 2025-10-14 08:44:49.563060711 +0000 UTC m=+0.094562806 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, version=17.1.9, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 14 04:44:49 localhost podman[88658]: 2025-10-14 08:44:49.57502411 +0000 UTC m=+0.106526405 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52) Oct 14 04:44:49 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:44:49 localhost podman[88657]: 2025-10-14 08:44:49.653534378 +0000 UTC m=+0.187718284 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1, name=rhosp17/openstack-ceilometer-compute) Oct 14 04:44:49 localhost podman[88657]: 2025-10-14 08:44:49.708231553 +0000 UTC m=+0.242415489 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 04:44:49 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:44:49 localhost podman[88659]: 2025-10-14 08:44:49.712015744 +0000 UTC m=+0.239092742 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Oct 14 04:44:49 localhost podman[88659]: 2025-10-14 08:44:49.793153973 +0000 UTC m=+0.320230971 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:44:49 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:44:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:44:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:44:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:44:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:44:51 localhost systemd[1]: tmp-crun.TSB3IC.mount: Deactivated successfully. 
Oct 14 04:44:51 localhost podman[88732]: 2025-10-14 08:44:51.529486702 +0000 UTC m=+0.066495849 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1) Oct 14 04:44:51 localhost podman[88731]: 2025-10-14 08:44:51.556607254 +0000 UTC m=+0.093168729 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T14:48:37, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:44:51 localhost podman[88732]: 2025-10-14 08:44:51.61996611 +0000 UTC m=+0.156975247 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack 
osp-17.1) Oct 14 04:44:51 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:44:51 localhost podman[88733]: 2025-10-14 08:44:51.589854988 +0000 UTC m=+0.124981356 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, container_name=nova_compute, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:44:51 localhost podman[88730]: 2025-10-14 08:44:51.71136496 +0000 UTC m=+0.248019278 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44) Oct 14 04:44:51 localhost podman[88733]: 2025-10-14 08:44:51.722405164 +0000 UTC m=+0.257531572 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step5, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12) Oct 14 04:44:51 localhost podman[88730]: 2025-10-14 08:44:51.736060228 +0000 UTC m=+0.272714536 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public) Oct 14 04:44:51 
localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:44:51 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:44:51 localhost podman[88731]: 2025-10-14 08:44:51.898347515 +0000 UTC m=+0.434909000 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute) Oct 14 04:44:51 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:44:52 localhost systemd[1]: tmp-crun.N4GyEJ.mount: Deactivated successfully. Oct 14 04:44:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:44:56 localhost podman[88826]: 2025-10-14 08:44:56.549334532 +0000 UTC m=+0.088707350 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, tcib_managed=true, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:44:56 localhost podman[88826]: 2025-10-14 08:44:56.736159203 +0000 UTC m=+0.275532051 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, container_name=metrics_qdr, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=) Oct 14 04:44:56 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:45:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. 
Oct 14 04:45:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:45:12 localhost podman[88934]: 2025-10-14 08:45:12.555476774 +0000 UTC m=+0.089098471 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, container_name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible) Oct 14 04:45:12 localhost podman[88934]: 2025-10-14 08:45:12.596315561 +0000 UTC m=+0.129937258 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:45:12 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:45:12 localhost podman[88933]: 2025-10-14 08:45:12.616001894 +0000 UTC m=+0.149466787 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=2, architecture=x86_64, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, tcib_managed=true) Oct 14 04:45:12 localhost podman[88933]: 2025-10-14 08:45:12.625037645 +0000 UTC m=+0.158502528 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, 
version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=2, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 14 
04:45:12 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:45:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:45:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:45:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:45:20 localhost systemd[1]: tmp-crun.hREfyZ.mount: Deactivated successfully. Oct 14 04:45:20 localhost podman[88973]: 2025-10-14 08:45:20.551367743 +0000 UTC m=+0.091609347 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, container_name=logrotate_crond, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 
'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4) Oct 14 04:45:20 localhost podman[88972]: 2025-10-14 08:45:20.610193668 +0000 UTC m=+0.152211620 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, 
tcib_managed=true, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, version=17.1.9) Oct 14 04:45:20 localhost podman[88973]: 2025-10-14 08:45:20.614807021 +0000 UTC m=+0.155048665 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, 
config_id=tripleo_step4, com.redhat.component=openstack-cron-container, release=1, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 14 04:45:20 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:45:20 localhost podman[88972]: 2025-10-14 08:45:20.661535494 +0000 UTC m=+0.203553476 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, release=1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc.) Oct 14 04:45:20 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:45:20 localhost podman[88974]: 2025-10-14 08:45:20.665159651 +0000 UTC m=+0.198582464 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, version=17.1.9, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:45:20 localhost podman[88974]: 2025-10-14 08:45:20.745084426 +0000 UTC m=+0.278507199 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, release=1, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4) Oct 14 04:45:20 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:45:21 localhost systemd[1]: tmp-crun.WotJyr.mount: Deactivated successfully. Oct 14 04:45:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:45:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:45:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:45:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:45:22 localhost podman[89043]: 2025-10-14 08:45:22.551859121 +0000 UTC m=+0.086479491 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=ovn_controller, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, 
com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4) Oct 14 04:45:22 localhost systemd[1]: tmp-crun.VOn3e0.mount: Deactivated successfully. Oct 14 04:45:22 localhost podman[89045]: 2025-10-14 08:45:22.624939695 +0000 UTC m=+0.151851751 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=) Oct 14 04:45:22 localhost podman[89045]: 2025-10-14 08:45:22.67020837 +0000 UTC m=+0.197120416 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12) Oct 14 04:45:22 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:45:22 localhost podman[89043]: 2025-10-14 08:45:22.698342158 +0000 UTC m=+0.232962528 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, distribution-scope=public) Oct 14 04:45:22 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:45:22 localhost podman[89044]: 2025-10-14 08:45:22.71757223 +0000 UTC m=+0.249965411 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, release=1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible) Oct 14 04:45:22 localhost podman[89051]: 2025-10-14 08:45:22.67475466 +0000 UTC m=+0.197538665 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, version=17.1.9, release=1) Oct 14 04:45:22 localhost podman[89051]: 2025-10-14 08:45:22.7581843 +0000 UTC m=+0.280968315 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, release=1, 
com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 14 04:45:22 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:45:23 localhost podman[89044]: 2025-10-14 08:45:23.079109778 +0000 UTC m=+0.611502989 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, distribution-scope=public) Oct 14 04:45:23 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:45:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:45:27 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:45:27 localhost recover_tripleo_nova_virtqemud[89145]: 62532 Oct 14 04:45:27 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:45:27 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 14 04:45:27 localhost podman[89138]: 2025-10-14 08:45:27.555462399 +0000 UTC m=+0.097186837 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true) Oct 14 04:45:27 localhost podman[89138]: 2025-10-14 08:45:27.746041739 +0000 UTC m=+0.287766187 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step1, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container) Oct 14 04:45:27 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:45:43 localhost podman[89192]: 2025-10-14 08:45:43.555559769 +0000 UTC m=+0.088455585 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, container_name=collectd, version=17.1.9, managed_by=tripleo_ansible, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 14 04:45:43 localhost podman[89192]: 2025-10-14 08:45:43.561037525 +0000 UTC m=+0.093933301 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, version=17.1.9, container_name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, release=2, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, architecture=x86_64, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 14 04:45:43 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:45:43 localhost systemd[1]: tmp-crun.s1A8V5.mount: Deactivated successfully. 
Oct 14 04:45:43 localhost podman[89193]: 2025-10-14 08:45:43.600183086 +0000 UTC m=+0.128337875 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=iscsid, io.openshift.expose-services=, io.buildah.version=1.33.12, 
maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64) Oct 14 04:45:43 localhost podman[89193]: 2025-10-14 08:45:43.611046035 +0000 UTC m=+0.139200864 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.9, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, distribution-scope=public, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:45:43 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:45:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:45:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:45:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:45:51 localhost podman[89232]: 2025-10-14 08:45:51.540776644 +0000 UTC m=+0.079423494 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, release=1, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 14 04:45:51 localhost systemd[1]: tmp-crun.uGfd29.mount: Deactivated successfully. Oct 14 04:45:51 localhost podman[89233]: 2025-10-14 08:45:51.603314438 +0000 UTC m=+0.138036773 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64) Oct 14 04:45:51 localhost podman[89233]: 2025-10-14 08:45:51.642540541 +0000 UTC m=+0.177262796 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-cron-container) Oct 14 04:45:51 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:45:51 localhost podman[89234]: 2025-10-14 08:45:51.656225106 +0000 UTC m=+0.188045804 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, tcib_managed=true, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:45:51 localhost podman[89232]: 2025-10-14 08:45:51.674238974 +0000 UTC m=+0.212885874 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:45:51 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:45:51 localhost podman[89234]: 2025-10-14 08:45:51.688076763 +0000 UTC m=+0.219897451 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi) Oct 14 04:45:51 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:45:52 localhost systemd[1]: tmp-crun.GtxcbS.mount: Deactivated successfully. Oct 14 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:45:53 localhost systemd[1]: tmp-crun.uHjhny.mount: Deactivated successfully. Oct 14 04:45:53 localhost podman[89304]: 2025-10-14 08:45:53.553082456 +0000 UTC m=+0.089535733 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9) Oct 14 04:45:53 localhost podman[89307]: 2025-10-14 08:45:53.565605819 +0000 UTC m=+0.096469447 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, container_name=nova_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step5) Oct 14 04:45:53 localhost podman[89307]: 2025-10-14 08:45:53.595120944 +0000 UTC m=+0.125984572 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1, 
container_name=nova_compute, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 14 04:45:53 localhost podman[89303]: 2025-10-14 08:45:53.608566862 +0000 UTC m=+0.148379408 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp 
openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:28:44, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:45:53 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:45:53 localhost podman[89303]: 2025-10-14 08:45:53.630987639 +0000 UTC m=+0.170800135 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, container_name=ovn_controller, maintainer=OpenStack TripleO Team) Oct 14 04:45:53 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:45:53 localhost podman[89305]: 2025-10-14 08:45:53.71640207 +0000 UTC m=+0.247191066 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:45:53 localhost podman[89305]: 2025-10-14 08:45:53.777230009 +0000 UTC m=+0.308018965 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, container_name=ovn_metadata_agent, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
architecture=x86_64, build-date=2025-07-21T16:28:53, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 04:45:53 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:45:53 localhost podman[89304]: 2025-10-14 08:45:53.912188549 +0000 UTC m=+0.448641866 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:45:53 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:45:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:45:58 localhost podman[89396]: 2025-10-14 08:45:58.552815471 +0000 UTC m=+0.091585476 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64) Oct 14 04:45:58 localhost podman[89396]: 2025-10-14 08:45:58.749111063 +0000 UTC m=+0.287881048 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:45:58 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:46:09 localhost podman[89524]: 2025-10-14 08:46:09.987661854 +0000 UTC m=+0.104129331 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, 
name=rhceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, version=7) Oct 14 04:46:10 localhost podman[89524]: 2025-10-14 08:46:10.112864735 +0000 UTC m=+0.229332232 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main, distribution-scope=public, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 04:46:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:46:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:46:14 localhost systemd[1]: tmp-crun.30i8kD.mount: Deactivated successfully. 
Oct 14 04:46:14 localhost podman[89666]: 2025-10-14 08:46:14.583632378 +0000 UTC m=+0.121961996 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, container_name=collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9) Oct 14 04:46:14 localhost podman[89666]: 2025-10-14 08:46:14.59722716 +0000 UTC m=+0.135556768 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, release=2, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:46:14 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:46:14 localhost podman[89667]: 2025-10-14 08:46:14.548184055 +0000 UTC m=+0.086981405 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, distribution-scope=public, release=1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 14 04:46:14 localhost podman[89667]: 2025-10-14 08:46:14.680508386 +0000 UTC m=+0.219305746 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, tcib_managed=true, version=17.1.9, release=1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Oct 14 04:46:14 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:46:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:46:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:46:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:46:22 localhost podman[89706]: 2025-10-14 08:46:22.538789765 +0000 UTC m=+0.074168805 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack 
Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:46:22 localhost podman[89707]: 2025-10-14 08:46:22.55779573 +0000 UTC m=+0.087760906 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:46:22 localhost podman[89706]: 2025-10-14 08:46:22.575159122 +0000 UTC m=+0.110538092 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:46:22 localhost podman[89707]: 2025-10-14 08:46:22.584306525 +0000 UTC m=+0.114271691 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, vendor=Red Hat, Inc.) Oct 14 04:46:22 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:46:22 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:46:22 localhost podman[89705]: 2025-10-14 08:46:22.651324158 +0000 UTC m=+0.186928074 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9) Oct 14 04:46:22 localhost podman[89705]: 2025-10-14 08:46:22.678122441 +0000 UTC m=+0.213726307 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 04:46:22 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:46:24 localhost podman[89778]: 2025-10-14 08:46:24.580952271 +0000 UTC m=+0.119672015 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:48:37, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1) Oct 14 04:46:24 localhost podman[89784]: 2025-10-14 08:46:24.59218397 +0000 UTC m=+0.121563825 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, release=1, vendor=Red Hat, Inc., config_id=tripleo_step5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 14 04:46:24 localhost podman[89777]: 2025-10-14 08:46:24.599618597 +0000 UTC m=+0.140737305 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 04:46:24 localhost podman[89779]: 2025-10-14 08:46:24.657038765 +0000 UTC m=+0.189395730 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, release=1, summary=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 14 04:46:24 localhost podman[89777]: 2025-10-14 08:46:24.6741344 +0000 UTC m=+0.215253078 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, managed_by=tripleo_ansible, release=1, vcs-type=git) Oct 14 04:46:24 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:46:24 localhost podman[89779]: 2025-10-14 08:46:24.699481784 +0000 UTC m=+0.231838709 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 04:46:24 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:46:24 localhost podman[89784]: 2025-10-14 08:46:24.725728942 +0000 UTC m=+0.255108827 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Oct 14 04:46:24 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:46:24 localhost podman[89778]: 2025-10-14 08:46:24.935183324 +0000 UTC m=+0.473903058 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, 
config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, batch=17.1_20250721.1, architecture=x86_64) Oct 14 04:46:24 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:46:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:46:29 localhost podman[89874]: 2025-10-14 08:46:29.543959798 +0000 UTC m=+0.085450434 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd) Oct 14 04:46:29 localhost podman[89874]: 2025-10-14 08:46:29.775185909 +0000 UTC m=+0.316676535 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, container_name=metrics_qdr, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp 
osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 14 04:46:29 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:46:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:46:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:46:45 localhost podman[89927]: 2025-10-14 08:46:45.548778314 +0000 UTC m=+0.085144177 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, container_name=iscsid, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:46:45 localhost podman[89927]: 2025-10-14 08:46:45.563119695 +0000 UTC m=+0.099485538 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, container_name=iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, architecture=x86_64) Oct 14 04:46:45 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:46:45 localhost systemd[1]: tmp-crun.RK6UzZ.mount: Deactivated successfully. Oct 14 04:46:45 localhost podman[89926]: 2025-10-14 08:46:45.663407542 +0000 UTC m=+0.202346593 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, tcib_managed=true, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, distribution-scope=public) Oct 14 04:46:45 localhost podman[89926]: 2025-10-14 08:46:45.676243664 +0000 UTC m=+0.215182775 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, container_name=collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, 
com.redhat.component=openstack-collectd-container, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, version=17.1.9, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Oct 14 04:46:45 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:46:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:46:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:46:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:46:53 localhost systemd[1]: tmp-crun.ruK1jr.mount: Deactivated successfully. Oct 14 04:46:53 localhost podman[89965]: 2025-10-14 08:46:53.562048926 +0000 UTC m=+0.098036029 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, 
tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.33.12, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 14 04:46:53 localhost podman[89965]: 2025-10-14 08:46:53.598168267 +0000 UTC m=+0.134155340 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 04:46:53 localhost systemd[1]: tmp-crun.otK9ni.mount: Deactivated successfully. 
Oct 14 04:46:53 localhost podman[89966]: 2025-10-14 08:46:53.611791859 +0000 UTC m=+0.142829900 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, description=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 04:46:53 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:46:53 localhost podman[89966]: 2025-10-14 08:46:53.640646186 +0000 UTC m=+0.171684217 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1) Oct 14 04:46:53 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:46:53 localhost podman[89964]: 2025-10-14 08:46:53.657192807 +0000 UTC m=+0.197038532 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:46:53 localhost podman[89964]: 2025-10-14 08:46:53.709204731 +0000 UTC m=+0.249050456 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1) Oct 14 04:46:53 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:46:55 localhost podman[90038]: 2025-10-14 08:46:55.546204138 +0000 UTC m=+0.087127729 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, release=1) Oct 14 04:46:55 localhost podman[90037]: 2025-10-14 08:46:55.600582925 +0000 UTC m=+0.143309853 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, tcib_managed=true, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, architecture=x86_64, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller) Oct 14 04:46:55 localhost podman[90037]: 2025-10-14 08:46:55.640075016 +0000 UTC m=+0.182801954 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 14 04:46:55 localhost systemd[1]: tmp-crun.yoLxfb.mount: Deactivated successfully. Oct 14 04:46:55 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:46:55 localhost podman[90039]: 2025-10-14 08:46:55.666447227 +0000 UTC m=+0.203954637 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, release=1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:46:55 localhost podman[90040]: 2025-10-14 08:46:55.753354089 +0000 UTC m=+0.287403836 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_id=tripleo_step5, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:46:55 localhost podman[90040]: 2025-10-14 08:46:55.77898321 +0000 UTC m=+0.313032907 container exec_died 
a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, release=1, version=17.1.9, config_id=tripleo_step5, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37) Oct 14 04:46:55 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:46:55 localhost podman[90039]: 2025-10-14 08:46:55.83346148 +0000 UTC m=+0.370968840 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 14 04:46:55 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:46:55 localhost podman[90038]: 2025-10-14 08:46:55.93307597 +0000 UTC m=+0.473999541 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, release=1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4) Oct 14 04:46:55 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:47:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:47:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:47:00 localhost recover_tripleo_nova_virtqemud[90134]: 62532 Oct 14 04:47:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:47:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:47:00 localhost systemd[1]: tmp-crun.XYmjZ2.mount: Deactivated successfully. 
Oct 14 04:47:00 localhost podman[90132]: 2025-10-14 08:47:00.550491924 +0000 UTC m=+0.085342062 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:47:00 localhost podman[90132]: 2025-10-14 08:47:00.742224044 +0000 UTC m=+0.277074172 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step1, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 14 04:47:00 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:47:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:47:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:47:16 localhost podman[90242]: 2025-10-14 08:47:16.559219953 +0000 UTC m=+0.089733368 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, container_name=iscsid, vcs-type=git, version=17.1.9, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=) Oct 14 04:47:16 localhost podman[90242]: 2025-10-14 08:47:16.600403299 +0000 UTC m=+0.130916684 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=) Oct 14 04:47:16 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:47:16 localhost podman[90241]: 2025-10-14 08:47:16.615394977 +0000 UTC m=+0.148945472 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, container_name=collectd, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:47:16 localhost podman[90241]: 2025-10-14 08:47:16.650120932 +0000 UTC m=+0.183671417 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, maintainer=OpenStack TripleO 
Team, build-date=2025-07-21T13:04:03, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc.) 
Oct 14 04:47:16 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:47:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:47:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:47:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:47:24 localhost systemd[1]: tmp-crun.jhlY1o.mount: Deactivated successfully. Oct 14 04:47:24 localhost podman[90281]: 2025-10-14 08:47:24.552892693 +0000 UTC m=+0.088978738 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 04:47:24 localhost podman[90281]: 2025-10-14 08:47:24.566161225 +0000 UTC m=+0.102247230 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, container_name=logrotate_crond, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, config_id=tripleo_step4) Oct 14 04:47:24 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:47:24 localhost podman[90280]: 2025-10-14 08:47:24.6535286 +0000 UTC m=+0.192370498 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, version=17.1.9) Oct 14 04:47:24 localhost podman[90280]: 2025-10-14 08:47:24.715313334 +0000 UTC m=+0.254155232 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Oct 14 04:47:24 localhost podman[90282]: 2025-10-14 08:47:24.723273575 +0000 UTC m=+0.253515035 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 14 04:47:24 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:47:24 localhost podman[90282]: 2025-10-14 08:47:24.775962866 +0000 UTC m=+0.306204266 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 14 04:47:24 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:47:25 localhost systemd[1]: tmp-crun.EqIHsm.mount: Deactivated successfully. Oct 14 04:47:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:47:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:47:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:47:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:47:26 localhost podman[90355]: 2025-10-14 08:47:26.541796952 +0000 UTC m=+0.083492002 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller) Oct 14 04:47:26 localhost podman[90358]: 2025-10-14 08:47:26.59774774 +0000 
UTC m=+0.130839142 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.9, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=) Oct 14 04:47:26 localhost podman[90358]: 2025-10-14 08:47:26.635116854 +0000 UTC m=+0.168208216 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37) Oct 14 04:47:26 localhost systemd[1]: 
a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:47:26 localhost podman[90356]: 2025-10-14 08:47:26.658230909 +0000 UTC m=+0.195591484 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T14:48:37, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:47:26 localhost podman[90357]: 2025-10-14 08:47:26.711969658 +0000 UTC m=+0.245966843 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:47:26 localhost podman[90355]: 2025-10-14 08:47:26.726646519 +0000 UTC m=+0.268341569 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-type=git, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64) Oct 14 04:47:26 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:47:26 localhost podman[90357]: 2025-10-14 08:47:26.78681837 +0000 UTC m=+0.320815595 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, release=1) Oct 14 04:47:26 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:47:27 localhost podman[90356]: 2025-10-14 08:47:27.029191118 +0000 UTC m=+0.566551703 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, version=17.1.9) Oct 14 04:47:27 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:47:27 localhost systemd[1]: tmp-crun.6xR1Ub.mount: Deactivated successfully. Oct 14 04:47:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:47:31 localhost podman[90453]: 2025-10-14 08:47:31.550659509 +0000 UTC m=+0.082045363 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, release=1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:47:31 localhost podman[90453]: 2025-10-14 08:47:31.744075905 +0000 UTC m=+0.275461769 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, container_name=metrics_qdr) Oct 14 04:47:31 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:47:47 localhost podman[90503]: 2025-10-14 08:47:47.52974688 +0000 UTC m=+0.071132234 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 14 04:47:47 localhost podman[90503]: 2025-10-14 08:47:47.541216565 +0000 UTC m=+0.082601909 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, release=2, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 14 04:47:47 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:47:47 localhost systemd[1]: tmp-crun.mPgnCK.mount: Deactivated successfully. 
Oct 14 04:47:47 localhost podman[90504]: 2025-10-14 08:47:47.592880739 +0000 UTC m=+0.132112585 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, name=rhosp17/openstack-iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container) Oct 14 04:47:47 localhost podman[90504]: 2025-10-14 08:47:47.632050671 +0000 UTC m=+0.171282477 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T13:27:15, 
tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 14 04:47:47 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:47:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:47:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:47:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:47:55 localhost podman[90540]: 2025-10-14 08:47:55.552246347 +0000 UTC m=+0.091537955 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33) Oct 14 04:47:55 localhost podman[90540]: 2025-10-14 08:47:55.584089355 +0000 UTC m=+0.123380923 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:47:55 localhost podman[90541]: 2025-10-14 08:47:55.598840927 +0000 UTC m=+0.134609792 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:47:55 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:47:55 localhost systemd[1]: tmp-crun.cmxjAd.mount: Deactivated successfully. 
Oct 14 04:47:55 localhost podman[90542]: 2025-10-14 08:47:55.66323763 +0000 UTC m=+0.197534536 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, 
tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:47:55 localhost podman[90541]: 2025-10-14 08:47:55.683347815 +0000 UTC m=+0.219116750 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 14 04:47:55 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:47:55 localhost podman[90542]: 2025-10-14 08:47:55.721263454 +0000 UTC m=+0.255560370 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 04:47:55 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:47:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:47:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:47:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:47:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:47:57 localhost systemd[1]: tmp-crun.h3imu1.mount: Deactivated successfully. 
Oct 14 04:47:57 localhost podman[90612]: 2025-10-14 08:47:57.569665246 +0000 UTC m=+0.108186639 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 14 04:47:57 localhost systemd[1]: tmp-crun.wIiKIJ.mount: Deactivated 
successfully. Oct 14 04:47:57 localhost podman[90618]: 2025-10-14 08:47:57.624028282 +0000 UTC m=+0.150894155 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:47:57 localhost podman[90612]: 2025-10-14 08:47:57.650371042 +0000 UTC m=+0.188892445 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., 
release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:28:44, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible) Oct 14 04:47:57 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:47:57 localhost podman[90613]: 2025-10-14 08:47:57.668442444 +0000 UTC m=+0.203334031 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute) Oct 14 04:47:57 localhost podman[90618]: 2025-10-14 08:47:57.676513719 +0000 UTC m=+0.203379602 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:47:57 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:47:57 localhost podman[90614]: 2025-10-14 08:47:57.718611748 +0000 UTC m=+0.249457567 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 04:47:57 localhost podman[90614]: 2025-10-14 08:47:57.79310355 +0000 UTC m=+0.323949329 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-type=git) Oct 14 04:47:57 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:47:58 localhost podman[90613]: 2025-10-14 08:47:58.035302183 +0000 UTC m=+0.570193830 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, description=Red 
Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible) Oct 14 04:47:58 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:48:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:48:02 localhost systemd[1]: tmp-crun.ZsPCtN.mount: Deactivated successfully. Oct 14 04:48:02 localhost podman[90705]: 2025-10-14 08:48:02.556146998 +0000 UTC m=+0.095439911 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Oct 14 04:48:02 localhost podman[90705]: 2025-10-14 08:48:02.743411139 +0000 UTC m=+0.282704062 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, distribution-scope=public, release=1, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true) Oct 14 04:48:02 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:48:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:48:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:48:18 localhost podman[90810]: 2025-10-14 08:48:18.534823886 +0000 UTC m=+0.075496129 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, build-date=2025-07-21T13:27:15, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 14 04:48:18 localhost podman[90810]: 2025-10-14 08:48:18.549046994 +0000 UTC m=+0.089719247 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 14 04:48:18 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:48:18 localhost podman[90809]: 2025-10-14 08:48:18.642140472 +0000 UTC m=+0.181221693 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, container_name=collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 14 04:48:18 localhost podman[90809]: 2025-10-14 08:48:18.681091338 +0000 UTC m=+0.220172539 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Oct 14 04:48:18 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated 
successfully. Oct 14 04:48:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:48:22 localhost recover_tripleo_nova_virtqemud[90850]: 62532 Oct 14 04:48:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:48:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:48:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:48:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:48:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:48:26 localhost podman[90852]: 2025-10-14 08:48:26.553578604 +0000 UTC m=+0.089505092 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) 
Oct 14 04:48:26 localhost podman[90851]: 2025-10-14 08:48:26.603107592 +0000 UTC m=+0.142540413 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:48:26 localhost podman[90852]: 2025-10-14 08:48:26.617797372 +0000 UTC m=+0.153723810 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, container_name=logrotate_crond, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-type=git, architecture=x86_64) Oct 14 04:48:26 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:48:26 localhost podman[90851]: 2025-10-14 08:48:26.653665267 +0000 UTC m=+0.193098078 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, version=17.1.9) Oct 14 04:48:26 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:48:26 localhost podman[90853]: 2025-10-14 08:48:26.668181243 +0000 UTC m=+0.198859851 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, 
name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 14 04:48:26 localhost podman[90853]: 2025-10-14 08:48:26.700092041 +0000 UTC m=+0.230770649 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, tcib_managed=true) Oct 14 04:48:26 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:48:28 localhost podman[90928]: 2025-10-14 08:48:28.544783404 +0000 UTC m=+0.073111565 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, container_name=ovn_metadata_agent, 
build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public) Oct 14 04:48:28 localhost systemd[1]: tmp-crun.jgjBa4.mount: Deactivated successfully. Oct 14 04:48:28 localhost podman[90926]: 2025-10-14 08:48:28.609741582 +0000 UTC m=+0.147671579 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, version=17.1.9) Oct 14 04:48:28 localhost podman[90928]: 2025-10-14 08:48:28.612909297 +0000 UTC m=+0.141237468 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53) Oct 14 04:48:28 localhost podman[90929]: 2025-10-14 08:48:28.668012143 +0000 UTC m=+0.194494406 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, release=1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:48:28 localhost podman[90926]: 2025-10-14 08:48:28.717622042 +0000 UTC m=+0.255552109 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-type=git, release=1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:48:28 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:48:28 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:48:28 localhost podman[90927]: 2025-10-14 08:48:28.719406569 +0000 UTC m=+0.252698093 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., version=17.1.9, release=1, io.buildah.version=1.33.12, container_name=nova_migration_target, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, vcs-type=git) Oct 14 04:48:28 localhost podman[90929]: 2025-10-14 08:48:28.87353567 +0000 UTC m=+0.400017873 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, container_name=nova_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute) Oct 14 04:48:28 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:48:29 localhost podman[90927]: 2025-10-14 08:48:29.120240523 +0000 UTC m=+0.653532057 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat 
OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37) Oct 14 04:48:29 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:48:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:48:33 localhost podman[91019]: 2025-10-14 08:48:33.541998772 +0000 UTC m=+0.079218648 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, 
batch=17.1_20250721.1, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:48:33 localhost podman[91019]: 2025-10-14 08:48:33.732799738 +0000 UTC m=+0.270019624 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, vcs-type=git, version=17.1.9, container_name=metrics_qdr, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp 
openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:48:33 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:48:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. 
Oct 14 04:48:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:48:49 localhost podman[91048]: 2025-10-14 08:48:49.549633403 +0000 UTC m=+0.090074839 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.9) Oct 14 04:48:49 localhost systemd[1]: tmp-crun.ZrAUC0.mount: Deactivated successfully. Oct 14 04:48:49 localhost podman[91049]: 2025-10-14 08:48:49.59991653 +0000 UTC m=+0.138432164 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-iscsid, version=17.1.9, container_name=iscsid, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15) Oct 14 04:48:49 localhost podman[91049]: 2025-10-14 08:48:49.612040392 +0000 UTC m=+0.150556026 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:48:49 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:48:49 localhost podman[91048]: 2025-10-14 08:48:49.665494944 +0000 UTC m=+0.205936390 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, architecture=x86_64, container_name=collectd, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9) Oct 14 04:48:49 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:48:57 localhost systemd[1]: tmp-crun.s3rRbG.mount: Deactivated successfully. 
Oct 14 04:48:57 localhost podman[91087]: 2025-10-14 08:48:57.53795944 +0000 UTC m=+0.079622339 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, release=1, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:48:57 localhost podman[91086]: 2025-10-14 08:48:57.55561584 +0000 UTC m=+0.095974165 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-cron-container, vcs-type=git, batch=17.1_20250721.1, release=1, version=17.1.9, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 
17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron) Oct 14 04:48:57 localhost podman[91086]: 2025-10-14 08:48:57.59321214 +0000 UTC m=+0.133570525 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, release=1, tcib_managed=true, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:48:57 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:48:57 localhost podman[91085]: 2025-10-14 08:48:57.642363627 +0000 UTC m=+0.185237229 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, vcs-type=git) Oct 14 04:48:57 localhost podman[91087]: 2025-10-14 08:48:57.694308749 +0000 UTC m=+0.235971618 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1) Oct 14 04:48:57 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:48:57 localhost podman[91085]: 2025-10-14 08:48:57.750421272 +0000 UTC m=+0.293294864 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4) Oct 14 04:48:57 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:48:59 localhost systemd[1]: tmp-crun.1QXhf2.mount: Deactivated successfully. 
Oct 14 04:48:59 localhost podman[91153]: 2025-10-14 08:48:59.589836395 +0000 UTC m=+0.125569841 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-07-21T13:28:44, container_name=ovn_controller, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:48:59 localhost podman[91156]: 2025-10-14 08:48:59.561624314 +0000 
UTC m=+0.086228874 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, container_name=nova_compute, distribution-scope=public, config_id=tripleo_step5, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute) Oct 14 04:48:59 localhost podman[91156]: 2025-10-14 08:48:59.645105844 +0000 UTC m=+0.169710424 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, release=1, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team) Oct 14 04:48:59 localhost systemd[1]: 
a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:48:59 localhost podman[91155]: 2025-10-14 08:48:59.665279682 +0000 UTC m=+0.195313826 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, architecture=x86_64, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 14 04:48:59 localhost podman[91153]: 2025-10-14 08:48:59.665690273 +0000 UTC m=+0.201423689 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1, version=17.1.9, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller) Oct 14 04:48:59 localhost podman[91154]: 2025-10-14 08:48:59.54042739 +0000 UTC m=+0.076267748 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, release=1, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:48:59 localhost podman[91155]: 2025-10-14 08:48:59.700102168 +0000 UTC m=+0.230136282 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, 
tcib_managed=true) Oct 14 04:48:59 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:48:59 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:48:59 localhost podman[91154]: 2025-10-14 08:48:59.914476391 +0000 UTC m=+0.450316779 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64) Oct 14 04:48:59 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:49:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:49:04 localhost podman[91247]: 2025-10-14 08:49:04.545956239 +0000 UTC m=+0.085438025 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1, version=17.1.9, name=rhosp17/openstack-qdrouterd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:49:04 localhost podman[91247]: 2025-10-14 08:49:04.774387315 +0000 UTC m=+0.313869081 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container) Oct 14 04:49:04 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:49:05 localhost sshd[91276]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:49:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. 
Oct 14 04:49:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:49:20 localhost podman[91357]: 2025-10-14 08:49:20.550181298 +0000 UTC m=+0.084521809 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, tcib_managed=true, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 14 04:49:20 localhost podman[91357]: 2025-10-14 08:49:20.592210966 +0000 UTC m=+0.126551467 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:49:20 localhost podman[91356]: 2025-10-14 08:49:20.599341365 +0000 UTC m=+0.134216791 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:04:03, architecture=x86_64, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Oct 14 04:49:20 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:49:20 localhost podman[91356]: 2025-10-14 08:49:20.614036517 +0000 UTC m=+0.148911983 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=2, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:49:20 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:49:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:49:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:49:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:49:28 localhost systemd[1]: tmp-crun.ujLeN5.mount: Deactivated successfully. 
Oct 14 04:49:28 localhost podman[91396]: 2025-10-14 08:49:28.560886241 +0000 UTC m=+0.090985431 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, io.openshift.expose-services=, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi) Oct 14 04:49:28 localhost systemd[1]: tmp-crun.nPUPGF.mount: Deactivated successfully. Oct 14 04:49:28 localhost podman[91394]: 2025-10-14 08:49:28.582904697 +0000 UTC m=+0.121017440 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-07-21T14:45:33, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=ceilometer_agent_compute) Oct 14 04:49:28 localhost podman[91396]: 2025-10-14 08:49:28.588364472 +0000 UTC m=+0.118463672 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, release=1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, 
io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 14 04:49:28 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:49:28 localhost podman[91394]: 2025-10-14 08:49:28.611052756 +0000 UTC m=+0.149165489 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, release=1, batch=17.1_20250721.1, vcs-type=git) Oct 14 04:49:28 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:49:28 localhost podman[91395]: 2025-10-14 08:49:28.673750484 +0000 UTC m=+0.207997915 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 04:49:28 localhost podman[91395]: 2025-10-14 08:49:28.711254702 +0000 UTC m=+0.245502093 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, release=1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 14 04:49:28 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:49:30 localhost podman[91465]: 2025-10-14 08:49:30.546844622 +0000 UTC m=+0.087222091 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.33.12) Oct 14 04:49:30 localhost systemd[1]: tmp-crun.DLAgMb.mount: Deactivated 
successfully. Oct 14 04:49:30 localhost podman[91467]: 2025-10-14 08:49:30.60578584 +0000 UTC m=+0.142686807 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1) Oct 14 04:49:30 localhost podman[91467]: 2025-10-14 08:49:30.660832655 +0000 UTC m=+0.197733622 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, version=17.1.9, 
architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 14 04:49:30 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:49:30 localhost podman[91466]: 2025-10-14 08:49:30.712627172 +0000 UTC m=+0.251546192 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, container_name=nova_migration_target, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:49:30 localhost podman[91465]: 2025-10-14 08:49:30.724705234 +0000 UTC m=+0.265082693 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, version=17.1.9, release=1, io.buildah.version=1.33.12, container_name=ovn_controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 04:49:30 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:49:30 localhost podman[91468]: 2025-10-14 08:49:30.663943197 +0000 UTC m=+0.197523325 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, vcs-type=git, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:49:30 localhost podman[91468]: 2025-10-14 08:49:30.794360417 +0000 UTC m=+0.327940555 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:49:30 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:49:31 localhost podman[91466]: 2025-10-14 08:49:31.104255991 +0000 UTC m=+0.643175001 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, container_name=nova_migration_target, vcs-type=git, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true) Oct 14 04:49:31 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:49:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:49:35 localhost podman[91560]: 2025-10-14 08:49:35.54556871 +0000 UTC m=+0.085374073 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, release=1, container_name=metrics_qdr, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 04:49:35 localhost podman[91560]: 2025-10-14 08:49:35.80721778 +0000 UTC m=+0.347023113 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:49:35 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:49:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:49:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:49:51 localhost podman[91589]: 2025-10-14 08:49:51.538045207 +0000 UTC m=+0.078657313 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, vcs-type=git, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 14 04:49:51 localhost podman[91589]: 2025-10-14 08:49:51.575054732 +0000 UTC m=+0.115666848 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, release=2, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:49:51 localhost systemd[1]: tmp-crun.bjq3wn.mount: Deactivated successfully. Oct 14 04:49:51 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:49:51 localhost podman[91590]: 2025-10-14 08:49:51.600299114 +0000 UTC m=+0.137939371 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, 
version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, container_name=iscsid) Oct 14 04:49:51 localhost podman[91590]: 2025-10-14 08:49:51.637368339 +0000 UTC m=+0.175008646 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, container_name=iscsid, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, vcs-type=git, config_id=tripleo_step3, version=17.1.9, distribution-scope=public, release=1) Oct 14 04:49:51 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:49:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:49:59 localhost recover_tripleo_nova_virtqemud[91643]: 62532 Oct 14 04:49:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:49:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 14 04:49:59 localhost podman[91629]: 2025-10-14 08:49:59.558080368 +0000 UTC m=+0.094003501 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, release=1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Oct 14 04:49:59 localhost podman[91631]: 2025-10-14 08:49:59.594587079 +0000 UTC m=+0.123001893 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, release=1, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible)
Oct 14 04:49:59 localhost podman[91629]: 2025-10-14 08:49:59.614077087 +0000 UTC m=+0.150000310 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9)
Oct 14 04:49:59 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully.
Oct 14 04:49:59 localhost systemd[1]: tmp-crun.PdWvn2.mount: Deactivated successfully.
Oct 14 04:49:59 localhost podman[91630]: 2025-10-14 08:49:59.665713272 +0000 UTC m=+0.198588404 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, release=1, vcs-type=git, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-cron)
Oct 14 04:49:59 localhost podman[91631]: 2025-10-14 08:49:59.671091085 +0000 UTC m=+0.199505949 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1)
Oct 14 04:49:59 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully.
Oct 14 04:49:59 localhost podman[91630]: 2025-10-14 08:49:59.704185114 +0000 UTC m=+0.237060256 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.buildah.version=1.33.12)
Oct 14 04:49:59 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully.
Oct 14 04:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.
Oct 14 04:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.
Oct 14 04:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.
Oct 14 04:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.
Oct 14 04:50:01 localhost podman[91703]: 2025-10-14 08:50:01.550073979 +0000 UTC m=+0.086527652 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_migration_target, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, version=17.1.9)
Oct 14 04:50:01 localhost podman[91710]: 2025-10-14 08:50:01.606522242 +0000 UTC m=+0.135550058 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute)
Oct 14 04:50:01 localhost podman[91702]: 2025-10-14 08:50:01.661085143 +0000 UTC m=+0.201024739 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12)
Oct 14 04:50:01 localhost podman[91704]: 2025-10-14 08:50:01.713070196 +0000 UTC m=+0.246418436 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64)
Oct 14 04:50:01 localhost podman[91702]: 2025-10-14 08:50:01.719211249 +0000 UTC m=+0.259150765 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 14 04:50:01 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully.
Oct 14 04:50:01 localhost podman[91710]: 2025-10-14 08:50:01.73314933 +0000 UTC m=+0.262177066 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37)
Oct 14 04:50:01 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully.
Oct 14 04:50:01 localhost podman[91704]: 2025-10-14 08:50:01.746991019 +0000 UTC m=+0.280339219 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 14 04:50:01 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully.
Oct 14 04:50:01 localhost podman[91703]: 2025-10-14 08:50:01.885940904 +0000 UTC m=+0.422394567 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 14 04:50:01 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully.
Oct 14 04:50:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.
Oct 14 04:50:06 localhost podman[91800]: 2025-10-14 08:50:06.546156728 +0000 UTC m=+0.085168857 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 14 04:50:06 localhost podman[91800]: 2025-10-14 08:50:06.758020963 +0000 UTC m=+0.297033052 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, config_id=tripleo_step1, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1)
Oct 14 04:50:06 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully.
Oct 14 04:50:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.
Oct 14 04:50:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.
Oct 14 04:50:22 localhost podman[91905]: 2025-10-14 08:50:22.549313797 +0000 UTC m=+0.090325273 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-07-21T13:04:03, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 14 04:50:22 localhost systemd[1]: tmp-crun.wyRFqj.mount: Deactivated successfully. Oct 14 04:50:22 localhost podman[91906]: 2025-10-14 08:50:22.601574998 +0000 UTC m=+0.138985839 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.9, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 14 04:50:22 localhost podman[91906]: 2025-10-14 08:50:22.613142575 +0000 UTC m=+0.150553456 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, container_name=iscsid) Oct 14 04:50:22 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:50:22 localhost podman[91905]: 2025-10-14 08:50:22.669884634 +0000 UTC m=+0.210896120 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9) Oct 14 04:50:22 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:50:30 localhost podman[91943]: 2025-10-14 08:50:30.558040898 +0000 UTC m=+0.092457890 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, architecture=x86_64, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Oct 14 04:50:30 localhost podman[91943]: 2025-10-14 08:50:30.56376546 +0000 UTC m=+0.098182382 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, com.redhat.component=openstack-cron-container, tcib_managed=true, description=Red Hat OpenStack 
Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team) Oct 14 04:50:30 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:50:30 localhost podman[91942]: 2025-10-14 08:50:30.610934353 +0000 UTC m=+0.148880219 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, version=17.1.9, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute) Oct 14 04:50:30 localhost podman[91944]: 2025-10-14 08:50:30.65378498 +0000 UTC m=+0.181528974 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4) Oct 14 04:50:30 localhost podman[91942]: 2025-10-14 08:50:30.65907518 +0000 UTC m=+0.197021026 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 04:50:30 localhost systemd[1]: 
1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:50:30 localhost podman[91944]: 2025-10-14 08:50:30.684270598 +0000 UTC m=+0.212014602 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, vcs-type=git, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, architecture=x86_64) Oct 14 04:50:30 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:50:32 localhost podman[92011]: 2025-10-14 08:50:32.550620199 +0000 UTC m=+0.083534347 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9) Oct 14 04:50:32 localhost podman[92013]: 2025-10-14 08:50:32.60419665 +0000 UTC m=+0.132581899 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 14 04:50:32 localhost podman[92013]: 2025-10-14 08:50:32.66112966 +0000 UTC m=+0.189514919 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, batch=17.1_20250721.1) Oct 14 04:50:32 localhost systemd[1]: tmp-crun.7A7xgF.mount: Deactivated successfully. Oct 14 04:50:32 localhost podman[92010]: 2025-10-14 08:50:32.679417755 +0000 UTC m=+0.216101204 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:28:44, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git) Oct 14 04:50:32 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:50:32 localhost podman[92012]: 2025-10-14 08:50:32.721704497 +0000 UTC m=+0.252007466 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, release=1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:50:32 localhost podman[92010]: 2025-10-14 08:50:32.760227718 +0000 UTC m=+0.296911167 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:50:32 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:50:32 localhost podman[92012]: 2025-10-14 08:50:32.792388732 +0000 UTC m=+0.322691701 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, 
io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-type=git) Oct 14 04:50:32 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:50:32 localhost podman[92011]: 2025-10-14 08:50:32.908220154 +0000 UTC m=+0.441134352 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:50:32 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:50:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:50:37 localhost podman[92104]: 2025-10-14 08:50:37.52664493 +0000 UTC m=+0.071485207 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team) Oct 14 04:50:37 localhost podman[92104]: 2025-10-14 08:50:37.756244901 +0000 UTC m=+0.301085208 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:50:37 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:50:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:50:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:50:53 localhost podman[92134]: 2025-10-14 08:50:53.542867032 +0000 UTC m=+0.079536771 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., 
maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:27:15, vcs-type=git, release=1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:50:53 localhost podman[92134]: 2025-10-14 08:50:53.557079689 +0000 UTC m=+0.093749438 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.9, distribution-scope=public, 
name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, container_name=iscsid, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:50:53 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:50:53 localhost podman[92133]: 2025-10-14 08:50:53.651924545 +0000 UTC m=+0.189061646 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=2, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, container_name=collectd) Oct 14 04:50:53 localhost podman[92133]: 2025-10-14 08:50:53.686710157 +0000 UTC m=+0.223847268 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=) Oct 14 04:50:53 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:51:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:51:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:51:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:51:01 localhost podman[92170]: 2025-10-14 08:51:01.552761973 +0000 UTC m=+0.089155115 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9) Oct 14 04:51:01 localhost podman[92170]: 2025-10-14 08:51:01.588249485 +0000 UTC m=+0.124642627 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 04:51:01 localhost systemd[1]: tmp-crun.1nH3rD.mount: Deactivated successfully. Oct 14 04:51:01 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:51:01 localhost podman[92172]: 2025-10-14 08:51:01.61783772 +0000 UTC m=+0.147000001 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:51:01 localhost podman[92171]: 2025-10-14 08:51:01.661796025 +0000 UTC m=+0.190873444 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, release=1, architecture=x86_64) Oct 14 04:51:01 localhost podman[92171]: 2025-10-14 08:51:01.669207512 +0000 UTC m=+0.198284961 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vcs-type=git, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.33.12, release=1, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:51:01 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:51:01 localhost podman[92172]: 2025-10-14 08:51:01.727602321 +0000 UTC m=+0.256764612 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:51:01 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:51:03 localhost systemd[1]: tmp-crun.9WC7o5.mount: Deactivated successfully. Oct 14 04:51:03 localhost systemd[1]: tmp-crun.8hP36I.mount: Deactivated successfully. 
Oct 14 04:51:03 localhost podman[92243]: 2025-10-14 08:51:03.616814867 +0000 UTC m=+0.153984576 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 14 04:51:03 localhost podman[92245]: 2025-10-14 08:51:03.583635556 +0000 
UTC m=+0.114472097 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, distribution-scope=public) Oct 14 04:51:03 localhost podman[92244]: 2025-10-14 08:51:03.650575872 +0000 UTC m=+0.188233344 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9) Oct 14 04:51:03 localhost podman[92245]: 2025-10-14 08:51:03.663488605 +0000 UTC m=+0.194325106 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=) Oct 14 04:51:03 localhost systemd[1]: 
9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:51:03 localhost podman[92243]: 2025-10-14 08:51:03.678524104 +0000 UTC m=+0.215693843 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 
14 04:51:03 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. Oct 14 04:51:03 localhost podman[92246]: 2025-10-14 08:51:03.766477647 +0000 UTC m=+0.293182228 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, release=1, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 14 04:51:03 localhost podman[92246]: 2025-10-14 08:51:03.799142593 +0000 UTC m=+0.325847154 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step5, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, 
com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:51:03 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:51:04 localhost podman[92244]: 2025-10-14 08:51:04.012060341 +0000 UTC m=+0.549717773 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, 
Inc., container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:51:04 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:51:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:51:08 localhost podman[92341]: 2025-10-14 08:51:08.546640371 +0000 UTC m=+0.086817774 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-type=git) Oct 14 04:51:08 localhost podman[92341]: 2025-10-14 08:51:08.739286182 +0000 UTC m=+0.279463645 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, release=1, vendor=Red Hat, Inc., container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container) Oct 14 04:51:08 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:51:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Oct 14 04:51:22 localhost recover_tripleo_nova_virtqemud[92433]: 62532 Oct 14 04:51:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:51:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:51:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:51:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:51:24 localhost podman[92449]: 2025-10-14 08:51:24.533102892 +0000 UTC m=+0.071730124 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, distribution-scope=public, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 14 04:51:24 localhost podman[92449]: 2025-10-14 08:51:24.546366823 +0000 UTC m=+0.084994025 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., tcib_managed=true, container_name=collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, release=2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, 
build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:51:24 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:51:24 localhost podman[92450]: 2025-10-14 08:51:24.60996055 +0000 UTC m=+0.147906345 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack 
Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=iscsid, name=rhosp17/openstack-iscsid, version=17.1.9) Oct 14 04:51:24 localhost podman[92450]: 2025-10-14 08:51:24.6174661 +0000 UTC m=+0.155411935 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, batch=17.1_20250721.1, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 14 04:51:24 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:51:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:51:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:51:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:51:32 localhost systemd[1]: tmp-crun.BCLnnR.mount: Deactivated successfully. 
Oct 14 04:51:32 localhost podman[92488]: 2025-10-14 08:51:32.560418144 +0000 UTC m=+0.101244036 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 14 04:51:32 localhost podman[92488]: 2025-10-14 08:51:32.595344761 +0000 UTC m=+0.136170703 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute) Oct 14 04:51:32 localhost systemd[1]: tmp-crun.5JuVif.mount: Deactivated successfully. Oct 14 04:51:32 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:51:32 localhost podman[92489]: 2025-10-14 08:51:32.614438207 +0000 UTC m=+0.150954505 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 14 04:51:32 localhost podman[92489]: 2025-10-14 08:51:32.627159345 +0000 UTC m=+0.163675683 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron) Oct 14 04:51:32 localhost podman[92490]: 2025-10-14 08:51:32.665104141 +0000 UTC m=+0.194701545 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi) Oct 14 04:51:32 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:51:32 localhost podman[92490]: 2025-10-14 08:51:32.727124147 +0000 UTC m=+0.256721581 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
build-date=2025-07-21T15:29:47, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible) Oct 14 04:51:32 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:51:34 localhost systemd[1]: tmp-crun.c8MyMg.mount: Deactivated successfully. 
Oct 14 04:51:34 localhost podman[92560]: 2025-10-14 08:51:34.557483442 +0000 UTC m=+0.088228792 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, description=Red Hat 
OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:51:34 localhost podman[92559]: 2025-10-14 08:51:34.612838421 +0000 UTC m=+0.146160468 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, release=1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:51:34 localhost podman[92561]: 2025-10-14 08:51:34.661317086 +0000 UTC m=+0.187314040 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, distribution-scope=public, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Oct 14 04:51:34 localhost podman[92559]: 2025-10-14 08:51:34.669107113 +0000 UTC m=+0.202429130 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 14 04:51:34 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Deactivated successfully. 
Oct 14 04:51:34 localhost podman[92562]: 2025-10-14 08:51:34.753064161 +0000 UTC m=+0.276889787 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, container_name=nova_compute, release=1, vcs-type=git) Oct 14 04:51:34 localhost podman[92561]: 2025-10-14 08:51:34.78850418 +0000 UTC m=+0.314501114 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 14 04:51:34 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. 
Oct 14 04:51:34 localhost podman[92562]: 2025-10-14 08:51:34.808654485 +0000 UTC m=+0.332480031 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public) Oct 14 04:51:34 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:51:34 localhost podman[92560]: 2025-10-14 08:51:34.954495284 +0000 UTC m=+0.485240634 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, 
container_name=nova_migration_target, vcs-type=git, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute) Oct 14 04:51:34 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:51:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:51:39 localhost podman[92654]: 2025-10-14 08:51:39.546866197 +0000 UTC m=+0.087184624 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 14 04:51:39 localhost podman[92654]: 2025-10-14 08:51:39.766426631 +0000 UTC m=+0.306745028 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, release=1, config_id=tripleo_step1, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9) Oct 14 04:51:39 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:51:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:51:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:51:55 localhost podman[92684]: 2025-10-14 08:51:55.554914489 +0000 UTC m=+0.089734981 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 14 04:51:55 localhost podman[92684]: 2025-10-14 08:51:55.588397288 +0000 UTC m=+0.123217750 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container) Oct 14 04:51:55 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:51:55 localhost podman[92683]: 2025-10-14 08:51:55.605686956 +0000 UTC m=+0.144536374 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, container_name=collectd, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=2, name=rhosp17/openstack-collectd, tcib_managed=true, distribution-scope=public, version=17.1.9) Oct 14 04:51:55 localhost podman[92683]: 2025-10-14 08:51:55.611880501 +0000 UTC m=+0.150729899 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=2, distribution-scope=public, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 14 04:51:55 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:52:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:52:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:52:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:52:03 localhost podman[92724]: 2025-10-14 08:52:03.543073726 +0000 UTC m=+0.082501170 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, build-date=2025-07-21T14:45:33, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 14 04:52:03 localhost podman[92725]: 2025-10-14 08:52:03.60954438 +0000 UTC m=+0.142328547 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:52:03 localhost podman[92725]: 2025-10-14 08:52:03.616422342 +0000 UTC m=+0.149206519 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 14 04:52:03 localhost podman[92724]: 2025-10-14 08:52:03.625449011 +0000 UTC m=+0.164876445 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red 
Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true) Oct 14 
04:52:03 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:52:03 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:52:03 localhost podman[92726]: 2025-10-14 08:52:03.711371411 +0000 UTC m=+0.243168032 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, config_id=tripleo_step4, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 14 04:52:03 localhost podman[92726]: 2025-10-14 08:52:03.741091668 +0000 UTC m=+0.272888299 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 14 04:52:03 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:52:05 localhost systemd[1]: tmp-crun.znAQbj.mount: Deactivated successfully. 
Oct 14 04:52:05 localhost podman[92793]: 2025-10-14 08:52:05.55507327 +0000 UTC m=+0.094021666 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 04:52:05 localhost podman[92796]: 2025-10-14 08:52:05.609919585 +0000 
UTC m=+0.138061544 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:52:05 localhost podman[92793]: 2025-10-14 08:52:05.634806304 +0000 UTC m=+0.173754720 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, distribution-scope=public, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=) Oct 14 04:52:05 localhost podman[92795]: 2025-10-14 08:52:05.65347936 +0000 UTC m=+0.183822357 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 14 04:52:05 localhost podman[92796]: 2025-10-14 08:52:05.659586152 +0000 UTC m=+0.187728091 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp 
osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, tcib_managed=true, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-type=git) Oct 14 04:52:05 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:52:05 localhost podman[92794]: 2025-10-14 08:52:05.577897865 +0000 UTC m=+0.107978995 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, 
release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:52:05 localhost podman[92793]: unhealthy Oct 14 04:52:05 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:52:05 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 04:52:05 localhost podman[92795]: 2025-10-14 08:52:05.724151264 +0000 UTC m=+0.254494291 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, architecture=x86_64, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public) Oct 14 04:52:05 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:52:05 localhost podman[92794]: 2025-10-14 08:52:05.940484824 +0000 UTC m=+0.470565914 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:52:05 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:52:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:52:10 localhost podman[92889]: 2025-10-14 08:52:10.554924213 +0000 UTC m=+0.087649896 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, version=17.1.9, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, container_name=metrics_qdr, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 14 04:52:10 localhost podman[92889]: 2025-10-14 08:52:10.742390356 +0000 UTC m=+0.275116069 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=metrics_qdr, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true) Oct 14 04:52:10 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:52:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:52:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:52:26 localhost podman[93030]: 2025-10-14 08:52:26.551989994 +0000 UTC m=+0.095426823 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, vendor=Red Hat, Inc., container_name=collectd) Oct 14 04:52:26 localhost podman[93030]: 2025-10-14 08:52:26.563032017 +0000 UTC m=+0.106468846 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03) Oct 14 04:52:26 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:52:26 localhost podman[93031]: 2025-10-14 08:52:26.650353203 +0000 UTC m=+0.191668455 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, container_name=iscsid) Oct 14 04:52:26 localhost podman[93031]: 2025-10-14 08:52:26.685030083 +0000 UTC m=+0.226345345 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 04:52:26 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:52:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:52:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:52:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:52:34 localhost podman[93085]: 2025-10-14 08:52:34.538829122 +0000 UTC m=+0.070483600 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, tcib_managed=true, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 04:52:34 localhost podman[93085]: 2025-10-14 08:52:34.573992096 +0000 UTC m=+0.105646624 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-cron, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:52:34 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:52:34 localhost podman[93086]: 2025-10-14 08:52:34.651369488 +0000 UTC m=+0.184056643 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git) Oct 14 04:52:34 localhost podman[93086]: 2025-10-14 08:52:34.687096886 +0000 UTC m=+0.219784041 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9) Oct 14 04:52:34 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 04:52:34 localhost podman[93084]: 2025-10-14 08:52:34.708777401 +0000 UTC m=+0.243215073 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, 
managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 14 04:52:34 localhost podman[93084]: 2025-10-14 08:52:34.764922541 +0000 UTC m=+0.299360163 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team) Oct 14 04:52:34 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:52:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:52:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:52:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:52:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:52:36 localhost podman[93157]: 2025-10-14 08:52:36.551108735 +0000 UTC m=+0.088919040 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:28:44, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:52:36 localhost podman[93158]: 2025-10-14 08:52:36.593358095 +0000 
UTC m=+0.127978756 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true) Oct 14 04:52:36 localhost podman[93157]: 2025-10-14 08:52:36.626954056 +0000 UTC m=+0.164764421 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 14 04:52:36 localhost podman[93157]: unhealthy Oct 14 04:52:36 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:52:36 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:52:36 localhost podman[93160]: 2025-10-14 08:52:36.717245881 +0000 UTC m=+0.246239704 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., container_name=nova_compute, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Oct 14 04:52:36 localhost podman[93159]: 2025-10-14 08:52:36.768932562 +0000 UTC m=+0.298509499 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, release=1) Oct 14 04:52:36 localhost podman[93160]: 2025-10-14 08:52:36.793577266 +0000 UTC m=+0.322571148 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, config_id=tripleo_step5, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:52:36 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:52:36 localhost podman[93159]: 2025-10-14 08:52:36.841404665 +0000 UTC m=+0.370981582 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T16:28:53, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git) Oct 14 04:52:36 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Deactivated successfully. Oct 14 04:52:36 localhost podman[93158]: 2025-10-14 08:52:36.993301034 +0000 UTC m=+0.527921725 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target) Oct 14 04:52:37 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:52:37 localhost systemd[1]: tmp-crun.B9tdv7.mount: Deactivated successfully. Oct 14 04:52:37 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:52:37 localhost recover_tripleo_nova_virtqemud[93257]: 62532 Oct 14 04:52:37 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:52:37 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:52:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5650 writes, 25K keys, 5650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5650 writes, 704 syncs, 8.03 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4 writes, 9 keys, 4 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 4 writes, 2 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 04:52:41 localhost podman[93258]: 2025-10-14 08:52:41.556798611 +0000 UTC m=+0.095561046 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, release=1, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true) Oct 14 04:52:41 localhost podman[93258]: 2025-10-14 08:52:41.750968612 +0000 UTC m=+0.289731087 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1, config_id=tripleo_step1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.33.12, distribution-scope=public) Oct 14 04:52:41 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 04:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 04:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 4831 writes, 21K keys, 4831 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4831 writes, 655 syncs, 7.38 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4 writes, 22 keys, 4 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 4 writes, 2 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 04:52:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:52:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:52:57 localhost podman[93286]: 2025-10-14 08:52:57.550530682 +0000 UTC m=+0.089150516 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=2, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, build-date=2025-07-21T13:04:03, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:52:57 localhost systemd[1]: tmp-crun.ABY1We.mount: Deactivated successfully. Oct 14 04:52:57 localhost podman[93287]: 2025-10-14 08:52:57.601871104 +0000 UTC m=+0.136387600 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, container_name=iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:52:57 localhost podman[93286]: 2025-10-14 08:52:57.618153236 +0000 UTC m=+0.156773080 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, container_name=collectd, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-collectd, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64) Oct 14 04:52:57 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:52:57 localhost podman[93287]: 2025-10-14 08:52:57.638115295 +0000 UTC m=+0.172631831 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, container_name=iscsid, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1) Oct 14 04:52:57 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:53:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:53:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:53:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:53:05 localhost podman[93326]: 2025-10-14 08:53:05.567171822 +0000 UTC m=+0.082630663 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=) Oct 14 04:53:05 localhost podman[93325]: 2025-10-14 08:53:05.547456579 +0000 UTC m=+0.069716581 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, 
architecture=x86_64, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, distribution-scope=public) Oct 14 04:53:05 localhost podman[93326]: 2025-10-14 08:53:05.604253136 +0000 UTC m=+0.119711987 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, 
batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, tcib_managed=true, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:53:05 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:53:05 localhost podman[93325]: 2025-10-14 08:53:05.629119935 +0000 UTC m=+0.151379957 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, version=17.1.9, config_id=tripleo_step4) Oct 14 04:53:05 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:53:05 localhost podman[93327]: 2025-10-14 08:53:05.677948811 +0000 UTC m=+0.190294619 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:53:05 localhost podman[93327]: 2025-10-14 08:53:05.731920812 +0000 UTC m=+0.244266590 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 14 04:53:05 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:53:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:53:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:53:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:53:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:53:07 localhost podman[93397]: 2025-10-14 08:53:07.606953641 +0000 UTC m=+0.146349323 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1) Oct 14 04:53:07 localhost systemd[1]: tmp-crun.O5z9An.mount: Deactivated successfully. Oct 14 04:53:07 localhost podman[93399]: 2025-10-14 08:53:07.666990304 +0000 UTC m=+0.200593672 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, version=17.1.9, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, architecture=x86_64, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container) Oct 14 04:53:07 localhost podman[93399]: 2025-10-14 08:53:07.693415275 +0000 UTC m=+0.227018643 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, 
distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container) Oct 14 04:53:07 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:53:07 localhost podman[93398]: 2025-10-14 08:53:07.802056927 +0000 UTC m=+0.339771774 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=1, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4) Oct 14 04:53:07 localhost podman[93396]: 2025-10-14 08:53:07.572865727 +0000 UTC m=+0.113523542 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-ovn-controller, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 04:53:07 localhost podman[93398]: 2025-10-14 08:53:07.84515663 +0000 UTC m=+0.382871537 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 14 04:53:07 localhost podman[93398]: unhealthy Oct 14 04:53:07 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:53:07 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 04:53:07 localhost podman[93396]: 2025-10-14 08:53:07.917650893 +0000 UTC m=+0.458308678 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, container_name=ovn_controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 14 04:53:07 localhost podman[93396]: unhealthy Oct 14 04:53:07 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:53:07 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:53:08 localhost podman[93397]: 2025-10-14 08:53:08.038214531 +0000 UTC m=+0.577610203 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, version=17.1.9) Oct 14 04:53:08 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:53:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:53:12 localhost podman[93484]: 2025-10-14 08:53:12.530549003 +0000 UTC m=+0.077245511 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, version=17.1.9, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true) Oct 14 04:53:12 localhost podman[93484]: 2025-10-14 08:53:12.701233471 +0000 UTC m=+0.247929969 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 14 04:53:12 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:53:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:53:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:53:27 localhost podman[93527]: 2025-10-14 08:53:27.825768974 +0000 UTC m=+0.097898898 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 14 04:53:27 localhost podman[93527]: 2025-10-14 08:53:27.842033555 +0000 UTC m=+0.114163469 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, container_name=iscsid, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:53:27 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:53:27 localhost podman[93526]: 2025-10-14 08:53:27.804091308 +0000 UTC m=+0.084697427 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, container_name=collectd, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:53:27 localhost podman[93526]: 2025-10-14 08:53:27.883154526 +0000 UTC m=+0.163760665 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, name=rhosp17/openstack-collectd, release=2, container_name=collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, version=17.1.9, architecture=x86_64, config_id=tripleo_step3) Oct 14 04:53:27 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:53:29 localhost podman[93683]: Oct 14 04:53:29 localhost podman[93683]: 2025-10-14 08:53:29.382644094 +0000 UTC m=+0.078090973 container create 00f4b6f53837d03fcf3c022a4e5b4baa912ee764421138013ebfa8f3a2c72538 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_chandrasekhar, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, release=553) Oct 14 04:53:29 localhost systemd[1]: Started libpod-conmon-00f4b6f53837d03fcf3c022a4e5b4baa912ee764421138013ebfa8f3a2c72538.scope. Oct 14 04:53:29 localhost podman[93683]: 2025-10-14 08:53:29.352814682 +0000 UTC m=+0.048261571 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 04:53:29 localhost systemd[1]: Started libcrun container. 
Oct 14 04:53:29 localhost podman[93683]: 2025-10-14 08:53:29.472012005 +0000 UTC m=+0.167458844 container init 00f4b6f53837d03fcf3c022a4e5b4baa912ee764421138013ebfa8f3a2c72538 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_chandrasekhar, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git) Oct 14 04:53:29 localhost systemd[1]: tmp-crun.TkRJCy.mount: Deactivated successfully. 
Oct 14 04:53:29 localhost podman[93683]: 2025-10-14 08:53:29.484847645 +0000 UTC m=+0.180294474 container start 00f4b6f53837d03fcf3c022a4e5b4baa912ee764421138013ebfa8f3a2c72538 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_chandrasekhar, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 04:53:29 localhost podman[93683]: 2025-10-14 08:53:29.485783029 +0000 UTC m=+0.181229898 container attach 00f4b6f53837d03fcf3c022a4e5b4baa912ee764421138013ebfa8f3a2c72538 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_chandrasekhar, version=7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_CLEAN=True, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, release=553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, build-date=2025-09-24T08:57:55) Oct 14 04:53:29 localhost inspiring_chandrasekhar[93698]: 167 167 Oct 14 04:53:29 localhost systemd[1]: libpod-00f4b6f53837d03fcf3c022a4e5b4baa912ee764421138013ebfa8f3a2c72538.scope: Deactivated successfully. Oct 14 04:53:29 localhost podman[93683]: 2025-10-14 08:53:29.489436257 +0000 UTC m=+0.184883156 container died 00f4b6f53837d03fcf3c022a4e5b4baa912ee764421138013ebfa8f3a2c72538 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_chandrasekhar, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, version=7, ceph=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, 
description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 04:53:29 localhost podman[93703]: 2025-10-14 08:53:29.585428753 +0000 UTC m=+0.082827339 container remove 00f4b6f53837d03fcf3c022a4e5b4baa912ee764421138013ebfa8f3a2c72538 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_chandrasekhar, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, version=7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7) Oct 14 04:53:29 localhost systemd[1]: libpod-conmon-00f4b6f53837d03fcf3c022a4e5b4baa912ee764421138013ebfa8f3a2c72538.scope: Deactivated successfully. 
Oct 14 04:53:29 localhost podman[93724]: Oct 14 04:53:29 localhost podman[93724]: 2025-10-14 08:53:29.804990267 +0000 UTC m=+0.077771903 container create 8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_wescoff, vendor=Red Hat, Inc., ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, CEPH_POINT_RELEASE=, RELEASE=main) Oct 14 04:53:29 localhost systemd[1]: Started libpod-conmon-8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a.scope. Oct 14 04:53:29 localhost systemd[1]: Started libcrun container. 
Oct 14 04:53:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950f2aae566f8db8f8869e77382ac595da2c1a5d36a3bfcc5d872fc1174a62cb/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 14 04:53:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950f2aae566f8db8f8869e77382ac595da2c1a5d36a3bfcc5d872fc1174a62cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 04:53:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950f2aae566f8db8f8869e77382ac595da2c1a5d36a3bfcc5d872fc1174a62cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 04:53:29 localhost podman[93724]: 2025-10-14 08:53:29.870421903 +0000 UTC m=+0.143203539 container init 8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_wescoff, name=rhceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-type=git, RELEASE=main, distribution-scope=public, version=7) Oct 14 04:53:29 localhost 
podman[93724]: 2025-10-14 08:53:29.773960994 +0000 UTC m=+0.046742640 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 04:53:29 localhost podman[93724]: 2025-10-14 08:53:29.880475349 +0000 UTC m=+0.153256995 container start 8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_wescoff, RELEASE=main, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 04:53:29 localhost podman[93724]: 2025-10-14 08:53:29.88086366 +0000 UTC m=+0.153645336 container attach 8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_wescoff, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, RELEASE=main, 
vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, version=7, ceph=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=) Oct 14 04:53:30 localhost systemd[1]: var-lib-containers-storage-overlay-597962720cc985ef95d9f0875f9870ab1375e5d6565d1059bf9ad62052c19f40-merged.mount: Deactivated successfully. Oct 14 04:53:30 localhost beautiful_wescoff[93740]: [ Oct 14 04:53:30 localhost beautiful_wescoff[93740]: { Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "available": false, Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "ceph_device": false, Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "lsm_data": {}, Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "lvs": [], Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "path": "/dev/sr0", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "rejected_reasons": [ Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "Insufficient space (<5GB)", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "Has a FileSystem" Oct 14 04:53:30 localhost beautiful_wescoff[93740]: ], Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "sys_api": { Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "actuators": null, Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "device_nodes": "sr0", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "human_readable_size": "482.00 KB", Oct 14 04:53:30 
localhost beautiful_wescoff[93740]: "id_bus": "ata", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "model": "QEMU DVD-ROM", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "nr_requests": "2", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "partitions": {}, Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "path": "/dev/sr0", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "removable": "1", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "rev": "2.5+", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "ro": "0", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "rotational": "1", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "sas_address": "", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "sas_device_handle": "", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "scheduler_mode": "mq-deadline", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "sectors": 0, Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "sectorsize": "2048", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "size": 493568.0, Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "support_discard": "0", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "type": "disk", Oct 14 04:53:30 localhost beautiful_wescoff[93740]: "vendor": "QEMU" Oct 14 04:53:30 localhost beautiful_wescoff[93740]: } Oct 14 04:53:30 localhost beautiful_wescoff[93740]: } Oct 14 04:53:30 localhost beautiful_wescoff[93740]: ] Oct 14 04:53:30 localhost systemd[1]: libpod-8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a.scope: Deactivated successfully. Oct 14 04:53:30 localhost systemd[1]: libpod-8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a.scope: Consumed 1.014s CPU time. 
Oct 14 04:53:30 localhost podman[93724]: 2025-10-14 08:53:30.872268689 +0000 UTC m=+1.145050345 container died 8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_wescoff, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64) Oct 14 04:53:30 localhost systemd[1]: var-lib-containers-storage-overlay-950f2aae566f8db8f8869e77382ac595da2c1a5d36a3bfcc5d872fc1174a62cb-merged.mount: Deactivated successfully. 
Oct 14 04:53:30 localhost podman[95534]: 2025-10-14 08:53:30.948111641 +0000 UTC m=+0.066167976 container remove 8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_wescoff, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , release=553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main) Oct 14 04:53:30 localhost systemd[1]: libpod-conmon-8b63a0b29a4691f61cecbdab02d581d45ac3cba79f7947feec42d9fef634f59a.scope: Deactivated successfully. Oct 14 04:53:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:53:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:53:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:53:36 localhost podman[95563]: 2025-10-14 08:53:36.564042737 +0000 UTC m=+0.095708690 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, tcib_managed=true) Oct 14 04:53:36 localhost podman[95563]: 2025-10-14 08:53:36.59617123 +0000 UTC m=+0.127837243 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true, release=1, version=17.1.9, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, batch=17.1_20250721.1) Oct 14 04:53:36 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:53:36 localhost podman[95565]: 2025-10-14 08:53:36.615023919 +0000 UTC m=+0.145511301 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, 
io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:53:36 localhost podman[95564]: 2025-10-14 08:53:36.660881186 +0000 UTC m=+0.193660078 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, container_name=logrotate_crond, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, batch=17.1_20250721.1, architecture=x86_64, maintainer=OpenStack TripleO Team) Oct 14 04:53:36 localhost podman[95564]: 2025-10-14 08:53:36.674050015 +0000 UTC m=+0.206828917 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, container_name=logrotate_crond, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 04:53:36 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:53:36 localhost podman[95565]: 2025-10-14 08:53:36.724933756 +0000 UTC m=+0.255421148 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:53:36 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:53:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:53:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:53:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:53:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:53:38 localhost podman[95637]: 2025-10-14 08:53:38.550094842 +0000 UTC m=+0.088837198 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:53:38 localhost systemd[1]: tmp-crun.Sw4iuu.mount: Deactivated successfully. Oct 14 04:53:38 localhost podman[95638]: 2025-10-14 08:53:38.604887696 +0000 UTC m=+0.137624912 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step4) Oct 14 04:53:38 localhost podman[95638]: 2025-10-14 08:53:38.646082258 +0000 UTC m=+0.178819464 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, 
distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 14 04:53:38 localhost podman[95638]: unhealthy Oct 14 04:53:38 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:53:38 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 04:53:38 localhost podman[95639]: 2025-10-14 08:53:38.711339449 +0000 UTC m=+0.242882684 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, release=1, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute) Oct 14 04:53:38 localhost podman[95636]: 2025-10-14 08:53:38.679256528 +0000 UTC m=+0.219410051 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:53:38 localhost podman[95636]: 2025-10-14 08:53:38.759081336 +0000 UTC m=+0.299234809 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 14 04:53:38 localhost podman[95636]: unhealthy Oct 14 04:53:38 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:53:38 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 04:53:38 localhost podman[95639]: 2025-10-14 08:53:38.813237742 +0000 UTC m=+0.344780977 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, release=1) Oct 14 04:53:38 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:53:38 localhost podman[95637]: 2025-10-14 08:53:38.969281422 +0000 UTC m=+0.508023828 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) Oct 14 04:53:38 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:53:39 localhost systemd[1]: tmp-crun.UVzODd.mount: Deactivated successfully. Oct 14 04:53:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:53:43 localhost podman[95724]: 2025-10-14 08:53:43.546287818 +0000 UTC m=+0.086826784 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, managed_by=tripleo_ansible, release=1, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12) Oct 14 04:53:43 localhost podman[95724]: 2025-10-14 08:53:43.769327004 +0000 UTC m=+0.309866020 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, release=1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, version=17.1.9, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 14 04:53:43 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:53:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:53:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:53:58 localhost podman[95753]: 2025-10-14 08:53:58.545207229 +0000 UTC m=+0.088906020 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:04:03, vcs-type=git, release=2, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Oct 14 04:53:58 localhost podman[95753]: 2025-10-14 08:53:58.557804963 +0000 UTC m=+0.101503734 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Oct 14 04:53:58 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:53:58 localhost podman[95754]: 2025-10-14 08:53:58.64625821 +0000 UTC m=+0.185679008 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, container_name=iscsid, managed_by=tripleo_ansible, release=1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:53:58 localhost podman[95754]: 2025-10-14 08:53:58.65494625 +0000 UTC m=+0.194367068 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, vcs-type=git) Oct 14 04:53:58 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:54:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:54:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:54:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:54:07 localhost systemd[1]: tmp-crun.LeDPWg.mount: Deactivated successfully. 
Oct 14 04:54:07 localhost podman[95792]: 2025-10-14 08:54:07.569261352 +0000 UTC m=+0.105900390 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 14 04:54:07 localhost podman[95793]: 2025-10-14 08:54:07.610942438 +0000 UTC m=+0.144961527 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, 
config_id=tripleo_step4, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, container_name=logrotate_crond, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 04:54:07 localhost podman[95793]: 2025-10-14 08:54:07.618138398 +0000 UTC m=+0.152157457 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, vcs-type=git, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 14 04:54:07 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:54:07 localhost podman[95792]: 2025-10-14 08:54:07.659194408 +0000 UTC m=+0.195833456 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:54:07 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:54:07 localhost podman[95794]: 2025-10-14 08:54:07.711240089 +0000 UTC m=+0.243156892 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9) Oct 14 04:54:07 localhost podman[95794]: 2025-10-14 08:54:07.743086953 +0000 UTC m=+0.275003806 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:54:07 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:54:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:54:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:54:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:54:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:54:09 localhost podman[95865]: 2025-10-14 08:54:09.556535809 +0000 UTC m=+0.092954747 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, batch=17.1_20250721.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, distribution-scope=public, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 14 04:54:09 localhost podman[95865]: 2025-10-14 08:54:09.573001946 +0000 UTC m=+0.109420874 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=ovn_controller, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-ovn-controller) Oct 14 04:54:09 localhost podman[95865]: unhealthy Oct 14 04:54:09 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:54:09 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:54:09 localhost podman[95867]: 2025-10-14 08:54:09.624918353 +0000 UTC m=+0.154069128 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 14 04:54:09 localhost podman[95868]: 2025-10-14 08:54:09.67530075 +0000 UTC m=+0.203774677 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step5, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, build-date=2025-07-21T14:48:37, vcs-type=git) Oct 14 04:54:09 localhost podman[95867]: 2025-10-14 08:54:09.694411287 +0000 UTC m=+0.223562052 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=ovn_metadata_agent, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true) Oct 14 04:54:09 localhost podman[95867]: unhealthy Oct 14 04:54:09 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:54:09 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 04:54:09 localhost podman[95866]: 2025-10-14 08:54:09.768539583 +0000 UTC m=+0.303952414 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, release=1, name=rhosp17/openstack-nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc.) Oct 14 04:54:09 localhost podman[95868]: 2025-10-14 08:54:09.791219985 +0000 UTC m=+0.319693922 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:54:09 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:54:10 localhost podman[95866]: 2025-10-14 08:54:10.147313051 +0000 UTC m=+0.682725882 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 14 04:54:10 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:54:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:54:14 localhost podman[95951]: 2025-10-14 08:54:14.538140759 +0000 UTC m=+0.079363857 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, release=1, vendor=Red Hat, Inc.) Oct 14 04:54:14 localhost podman[95951]: 2025-10-14 08:54:14.755163865 +0000 UTC m=+0.296386954 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, batch=17.1_20250721.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible) Oct 14 04:54:14 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:54:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:54:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:54:29 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Oct 14 04:54:29 localhost recover_tripleo_nova_virtqemud[95994]: 62532 Oct 14 04:54:29 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:54:29 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:54:29 localhost podman[95981]: 2025-10-14 08:54:29.562887696 +0000 UTC m=+0.103343732 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=2, version=17.1.9, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, container_name=collectd) Oct 14 04:54:29 localhost podman[95981]: 2025-10-14 08:54:29.575193382 +0000 UTC m=+0.115649378 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, release=2, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:54:29 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:54:29 localhost podman[95982]: 2025-10-14 08:54:29.66816144 +0000 UTC m=+0.206159011 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 14 04:54:29 localhost podman[95982]: 2025-10-14 08:54:29.681091912 +0000 UTC m=+0.219089523 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, name=rhosp17/openstack-iscsid, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:54:29 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:54:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:54:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:54:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:54:38 localhost podman[96152]: 2025-10-14 08:54:38.58391457 +0000 UTC m=+0.114820177 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-cron) Oct 14 04:54:38 localhost podman[96152]: 2025-10-14 08:54:38.622032381 +0000 UTC m=+0.152937958 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, distribution-scope=public, release=1, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, container_name=logrotate_crond, name=rhosp17/openstack-cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 14 04:54:38 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:54:38 localhost podman[96151]: 2025-10-14 08:54:38.644502188 +0000 UTC m=+0.176995776 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9) Oct 14 04:54:38 localhost podman[96151]: 2025-10-14 08:54:38.678173171 +0000 UTC m=+0.210666739 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:54:38 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:54:38 localhost podman[96153]: 2025-10-14 08:54:38.697570075 +0000 UTC m=+0.226606432 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git) Oct 14 04:54:38 localhost podman[96153]: 2025-10-14 08:54:38.75847965 +0000 UTC m=+0.287516017 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9) Oct 14 04:54:38 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:54:39 localhost systemd[1]: tmp-crun.fBqD5U.mount: Deactivated successfully. Oct 14 04:54:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:54:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:54:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:54:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:54:40 localhost systemd[1]: tmp-crun.VRdt59.mount: Deactivated successfully. 
Oct 14 04:54:40 localhost podman[96222]: 2025-10-14 08:54:40.550526479 +0000 UTC m=+0.089385743 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, release=1, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:54:40 localhost podman[96223]: 2025-10-14 08:54:40.603509444 +0000 
UTC m=+0.139954864 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Oct 14 04:54:40 localhost systemd[1]: tmp-crun.FqSCAu.mount: Deactivated successfully. Oct 14 04:54:40 localhost podman[96224]: 2025-10-14 08:54:40.667316477 +0000 UTC m=+0.198257900 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=ovn_metadata_agent, tcib_managed=true, vcs-type=git, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:54:40 localhost podman[96222]: 2025-10-14 08:54:40.684631146 +0000 UTC m=+0.223490400 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, build-date=2025-07-21T13:28:44, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container) Oct 14 04:54:40 localhost podman[96222]: unhealthy Oct 14 04:54:40 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:54:40 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 04:54:40 localhost podman[96224]: 2025-10-14 08:54:40.711287554 +0000 UTC m=+0.242228977 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 14 04:54:40 localhost podman[96224]: unhealthy Oct 14 04:54:40 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:54:40 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 04:54:40 localhost podman[96225]: 2025-10-14 08:54:40.583947405 +0000 UTC m=+0.112141836 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:54:40 localhost podman[96225]: 2025-10-14 08:54:40.769078096 +0000 UTC m=+0.297272527 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_id=tripleo_step5, version=17.1.9, release=1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 14 04:54:40 
localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:54:41 localhost podman[96223]: 2025-10-14 08:54:41.00407558 +0000 UTC m=+0.540521010 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, container_name=nova_migration_target, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:54:41 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:54:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:54:45 localhost systemd[1]: tmp-crun.WGmOL7.mount: Deactivated successfully. Oct 14 04:54:45 localhost podman[96312]: 2025-10-14 08:54:45.549092469 +0000 UTC m=+0.090550134 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Oct 14 04:54:45 localhost podman[96312]: 2025-10-14 08:54:45.7617867 +0000 UTC m=+0.303244335 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, config_id=tripleo_step1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-type=git, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 14 04:54:45 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:55:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:55:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:55:00 localhost podman[96341]: 2025-10-14 08:55:00.555126628 +0000 UTC m=+0.090047570 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, tcib_managed=true, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 14 04:55:00 localhost podman[96341]: 2025-10-14 08:55:00.589903121 +0000 UTC m=+0.124824053 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, build-date=2025-07-21T13:04:03, vcs-type=git, vendor=Red Hat, Inc.) 
Oct 14 04:55:00 localhost podman[96342]: 2025-10-14 08:55:00.604361534 +0000 UTC m=+0.135069344 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:55:00 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:55:00 localhost podman[96342]: 2025-10-14 08:55:00.643063521 +0000 UTC m=+0.173771311 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid) Oct 14 04:55:00 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:55:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:55:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:55:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:55:09 localhost podman[96379]: 2025-10-14 08:55:09.544224964 +0000 UTC m=+0.081922304 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 14 04:55:09 localhost podman[96379]: 2025-10-14 08:55:09.593600114 +0000 UTC m=+0.131297494 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, release=1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:55:09 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:55:09 localhost podman[96381]: 2025-10-14 08:55:09.598014911 +0000 UTC m=+0.130086931 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 04:55:09 localhost systemd[1]: tmp-crun.wNthSa.mount: Deactivated successfully. Oct 14 04:55:09 localhost podman[96380]: 2025-10-14 08:55:09.656724958 +0000 UTC m=+0.190783191 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:55:09 localhost podman[96380]: 2025-10-14 08:55:09.663148739 +0000 UTC m=+0.197206992 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, version=17.1.9, vcs-type=git) Oct 14 04:55:09 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:55:09 localhost podman[96381]: 2025-10-14 08:55:09.678037584 +0000 UTC m=+0.210109604 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, batch=17.1_20250721.1, release=1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, tcib_managed=true) Oct 14 04:55:09 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:55:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:55:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:55:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:55:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:55:11 localhost podman[96452]: 2025-10-14 08:55:11.551342548 +0000 UTC m=+0.087954743 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=nova_migration_target, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 
nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:55:11 localhost systemd[1]: tmp-crun.NTEdOU.mount: Deactivated successfully. Oct 14 04:55:11 localhost podman[96451]: 2025-10-14 08:55:11.611660029 +0000 UTC m=+0.154151420 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, container_name=ovn_controller, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Oct 14 04:55:11 localhost podman[96451]: 2025-10-14 08:55:11.650613772 +0000 UTC m=+0.193105233 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 14 04:55:11 localhost podman[96453]: 2025-10-14 08:55:11.660758352 +0000 UTC m=+0.195674892 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1) Oct 14 04:55:11 localhost podman[96453]: 2025-10-14 08:55:11.699474398 +0000 UTC m=+0.234390928 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9) Oct 14 04:55:11 localhost podman[96453]: unhealthy Oct 14 04:55:11 localhost systemd[1]: 
9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:55:11 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 04:55:11 localhost podman[96451]: unhealthy Oct 14 04:55:11 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:55:11 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:55:11 localhost podman[96458]: 2025-10-14 08:55:11.741825222 +0000 UTC m=+0.272968122 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, vcs-type=git) Oct 14 04:55:11 localhost podman[96458]: 2025-10-14 08:55:11.773821311 +0000 UTC m=+0.304964211 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1) Oct 14 04:55:11 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:55:11 localhost podman[96452]: 2025-10-14 08:55:11.925104834 +0000 UTC m=+0.461717069 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Oct 14 04:55:11 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:55:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:55:16 localhost systemd[1]: tmp-crun.zu8Lqv.mount: Deactivated successfully. 
Oct 14 04:55:16 localhost podman[96535]: 2025-10-14 08:55:16.559909372 +0000 UTC m=+0.099281294 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, 
io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, release=1) Oct 14 04:55:16 localhost podman[96535]: 2025-10-14 08:55:16.758977882 +0000 UTC m=+0.298349804 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack 
osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-type=git, batch=17.1_20250721.1, container_name=metrics_qdr, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, architecture=x86_64) Oct 14 04:55:16 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:55:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:55:22 localhost recover_tripleo_nova_virtqemud[96564]: 62532 Oct 14 04:55:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:55:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:55:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:55:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:55:31 localhost podman[96565]: 2025-10-14 08:55:31.549181677 +0000 UTC m=+0.088688734 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, tcib_managed=true, distribution-scope=public, release=2, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, vcs-type=git, container_name=collectd, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 14 04:55:31 localhost podman[96566]: 2025-10-14 08:55:31.595004882 +0000 UTC m=+0.130625575 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step3, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 14 04:55:31 localhost podman[96566]: 2025-10-14 08:55:31.607016881 +0000 UTC m=+0.142637564 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, architecture=x86_64, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true) Oct 14 04:55:31 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 04:55:31 localhost podman[96565]: 2025-10-14 08:55:31.662683788 +0000 UTC m=+0.202190845 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3) Oct 14 04:55:31 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:55:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:55:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:55:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:55:40 localhost podman[96679]: 2025-10-14 08:55:40.548836595 +0000 UTC m=+0.085760657 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 14 04:55:40 localhost podman[96679]: 2025-10-14 08:55:40.59992251 +0000 UTC m=+0.136846642 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, distribution-scope=public, container_name=ceilometer_agent_compute, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 04:55:40 localhost systemd[1]: tmp-crun.oRd40L.mount: Deactivated successfully. 
Oct 14 04:55:40 localhost podman[96681]: 2025-10-14 08:55:40.622473838 +0000 UTC m=+0.152134917 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1) Oct 14 04:55:40 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:55:40 localhost podman[96680]: 2025-10-14 08:55:40.666716542 +0000 UTC m=+0.199088463 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, config_id=tripleo_step4, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-cron-container, tcib_managed=true, container_name=logrotate_crond, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:55:40 localhost podman[96680]: 2025-10-14 08:55:40.673986104 +0000 UTC m=+0.206358015 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, config_id=tripleo_step4) Oct 14 04:55:40 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:55:40 localhost podman[96681]: 2025-10-14 08:55:40.730519583 +0000 UTC m=+0.260180672 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, release=1, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:55:40 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:55:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:55:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:55:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:55:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:55:42 localhost podman[96750]: 2025-10-14 08:55:42.530785811 +0000 UTC m=+0.073031528 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 04:55:42 localhost systemd[1]: tmp-crun.Lqp8TU.mount: Deactivated successfully. Oct 14 04:55:42 localhost podman[96752]: 2025-10-14 08:55:42.591791309 +0000 UTC m=+0.127006529 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step5, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, architecture=x86_64, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Oct 14 04:55:42 localhost podman[96752]: 2025-10-14 08:55:42.623132751 +0000 UTC m=+0.158347981 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.buildah.version=1.33.12) Oct 14 04:55:42 localhost podman[96749]: 2025-10-14 08:55:42.636145796 +0000 UTC m=+0.176857692 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9) Oct 14 04:55:42 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:55:42 localhost podman[96749]: 2025-10-14 08:55:42.657978875 +0000 UTC m=+0.198690781 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=) Oct 14 04:55:42 localhost podman[96751]: 2025-10-14 08:55:42.559831021 +0000 UTC m=+0.094955719 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, container_name=ovn_metadata_agent, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 14 04:55:42 localhost podman[96751]: 2025-10-14 08:55:42.696926738 +0000 UTC m=+0.232051366 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 04:55:42 localhost podman[96751]: unhealthy Oct 14 04:55:42 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:55:42 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 04:55:42 localhost podman[96749]: unhealthy Oct 14 04:55:42 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:55:42 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:55:42 localhost podman[96750]: 2025-10-14 08:55:42.914292474 +0000 UTC m=+0.456538221 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:55:42 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:55:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:55:47 localhost systemd[1]: tmp-crun.UZiEL7.mount: Deactivated successfully. 
Oct 14 04:55:47 localhost podman[96833]: 2025-10-14 08:55:47.561885723 +0000 UTC m=+0.099437319 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, batch=17.1_20250721.1, release=1, build-date=2025-07-21T13:07:59, tcib_managed=true) Oct 14 04:55:47 localhost podman[96833]: 2025-10-14 08:55:47.808151956 +0000 UTC m=+0.345703492 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1) Oct 14 04:55:47 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:56:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:56:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:56:02 localhost podman[96862]: 2025-10-14 08:56:02.538347057 +0000 UTC m=+0.082528980 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20250721.1, vcs-type=git) Oct 14 04:56:02 localhost podman[96862]: 2025-10-14 08:56:02.552131453 +0000 UTC m=+0.096313426 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, container_name=collectd, release=2, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Oct 14 04:56:02 localhost podman[96863]: 2025-10-14 08:56:02.600611209 +0000 UTC m=+0.140726204 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-type=git) Oct 14 04:56:02 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:56:02 localhost podman[96863]: 2025-10-14 08:56:02.641335609 +0000 UTC m=+0.181450604 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, batch=17.1_20250721.1, 
container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:56:02 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:56:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:56:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:56:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:56:11 localhost systemd[1]: tmp-crun.ftwLm8.mount: Deactivated successfully. Oct 14 04:56:11 localhost podman[96903]: 2025-10-14 08:56:11.542700378 +0000 UTC m=+0.085359175 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, container_name=ceilometer_agent_compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33) Oct 14 04:56:11 localhost podman[96903]: 2025-10-14 08:56:11.60266839 +0000 UTC m=+0.145327187 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc.) 
Oct 14 04:56:11 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:56:11 localhost podman[96904]: 2025-10-14 08:56:11.646546233 +0000 UTC m=+0.185725938 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, 
container_name=logrotate_crond, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=1) Oct 14 04:56:11 localhost podman[96905]: 2025-10-14 08:56:11.604546559 +0000 UTC m=+0.138996778 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, version=17.1.9, maintainer=OpenStack TripleO Team) Oct 14 04:56:11 localhost podman[96904]: 2025-10-14 08:56:11.682058545 +0000 UTC m=+0.221238260 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, container_name=logrotate_crond, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 04:56:11 localhost podman[96905]: 2025-10-14 08:56:11.684324346 +0000 UTC m=+0.218774515 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 14 04:56:11 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:56:11 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:56:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:56:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 04:56:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:56:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:56:13 localhost podman[96977]: 2025-10-14 08:56:13.544490101 +0000 UTC m=+0.081541814 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4) Oct 14 04:56:13 localhost podman[96976]: 2025-10-14 08:56:13.593611794 +0000 UTC m=+0.132522716 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, 
build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 14 04:56:13 localhost podman[96977]: 2025-10-14 08:56:13.613293456 +0000 UTC m=+0.150345169 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, architecture=x86_64, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, release=1, 
batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Oct 14 04:56:13 localhost podman[96977]: unhealthy Oct 14 04:56:13 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:56:13 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 04:56:13 localhost podman[96979]: 2025-10-14 08:56:13.705001459 +0000 UTC m=+0.238461767 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, architecture=x86_64, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1) Oct 14 04:56:13 localhost podman[96979]: 2025-10-14 08:56:13.73405772 +0000 UTC m=+0.267518058 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, 
com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, tcib_managed=true) Oct 14 04:56:13 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:56:13 localhost podman[96975]: 2025-10-14 08:56:13.747002893 +0000 UTC m=+0.289435558 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, build-date=2025-07-21T13:28:44, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true) Oct 14 04:56:13 localhost podman[96975]: 2025-10-14 08:56:13.787443637 +0000 UTC m=+0.329876312 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
version=17.1.9, tcib_managed=true, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller) Oct 14 04:56:13 localhost podman[96975]: unhealthy Oct 14 04:56:13 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:56:13 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 04:56:14 localhost podman[96976]: 2025-10-14 08:56:14.005767378 +0000 UTC m=+0.544678310 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, 
release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_id=tripleo_step4) Oct 14 04:56:14 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:56:14 localhost systemd[1]: tmp-crun.aL27L4.mount: Deactivated successfully. Oct 14 04:56:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:56:18 localhost systemd[1]: tmp-crun.0vgLui.mount: Deactivated successfully. Oct 14 04:56:18 localhost podman[97058]: 2025-10-14 08:56:18.539032903 +0000 UTC m=+0.084172514 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 
1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, container_name=metrics_qdr, distribution-scope=public, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9) Oct 14 04:56:18 localhost podman[97058]: 2025-10-14 08:56:18.736188593 +0000 UTC m=+0.281328134 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step1, container_name=metrics_qdr, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:56:18 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:56:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:56:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:56:33 localhost podman[97088]: 2025-10-14 08:56:33.557926074 +0000 UTC m=+0.094408796 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, build-date=2025-07-21T13:04:03, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, vcs-type=git) Oct 14 04:56:33 localhost podman[97089]: 2025-10-14 08:56:33.608276109 +0000 UTC m=+0.142907432 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., container_name=iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9) Oct 14 04:56:33 localhost podman[97088]: 2025-10-14 08:56:33.621371457 +0000 UTC m=+0.157854139 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, config_id=tripleo_step3, release=2, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:56:33 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:56:33 localhost podman[97089]: 2025-10-14 08:56:33.642878637 +0000 UTC m=+0.177509960 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 14 04:56:33 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:56:36 localhost podman[97225]: 2025-10-14 08:56:36.579550929 +0000 UTC m=+0.101390281 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, name=rhceph, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, release=553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vcs-type=git) Oct 14 04:56:36 localhost podman[97225]: 2025-10-14 08:56:36.664848261 +0000 UTC m=+0.186687593 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 04:56:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:56:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:56:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:56:42 localhost podman[97368]: 2025-10-14 08:56:42.562563832 +0000 UTC m=+0.096737526 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:56:42 localhost podman[97368]: 2025-10-14 08:56:42.600105609 +0000 UTC m=+0.134279313 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, release=1, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 14 04:56:42 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:56:42 localhost podman[97369]: 2025-10-14 08:56:42.616554885 +0000 UTC m=+0.145688155 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, release=1, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, 
build-date=2025-07-21T13:07:52, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:56:42 localhost podman[97370]: 2025-10-14 08:56:42.663894671 +0000 UTC m=+0.192202970 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:56:42 localhost podman[97369]: 2025-10-14 08:56:42.68233364 +0000 UTC m=+0.211466950 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52) Oct 14 04:56:42 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:56:42 localhost podman[97370]: 2025-10-14 08:56:42.726226145 +0000 UTC m=+0.254534394 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi) Oct 14 04:56:42 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:56:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:56:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:56:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:56:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:56:44 localhost podman[97440]: 2025-10-14 08:56:44.578985943 +0000 UTC m=+0.115023582 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, release=1, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=) Oct 14 04:56:44 localhost podman[97440]: 2025-10-14 08:56:44.620990707 +0000 
UTC m=+0.157028336 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T13:28:44, container_name=ovn_controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Oct 14 04:56:44 localhost podman[97440]: unhealthy Oct 14 04:56:44 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process 
exited, code=exited, status=1/FAILURE Oct 14 04:56:44 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:56:44 localhost podman[97441]: 2025-10-14 08:56:44.631698192 +0000 UTC m=+0.166132168 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=nova_migration_target, io.buildah.version=1.33.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, tcib_managed=true) Oct 14 04:56:44 localhost systemd[1]: tmp-crun.kXZuRr.mount: Deactivated successfully. Oct 14 04:56:44 localhost podman[97443]: 2025-10-14 08:56:44.698934875 +0000 UTC m=+0.225471432 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:48:37) Oct 14 04:56:44 localhost podman[97443]: 2025-10-14 08:56:44.741539235 +0000 UTC m=+0.268075832 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 
17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, config_id=tripleo_step5, release=1, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team) Oct 14 04:56:44 localhost podman[97442]: 2025-10-14 08:56:44.748202242 +0000 UTC m=+0.277479781 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 14 04:56:44 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:56:44 localhost podman[97442]: 2025-10-14 08:56:44.792079486 +0000 UTC m=+0.321357045 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, release=1, batch=17.1_20250721.1, config_id=tripleo_step4) Oct 14 04:56:44 localhost podman[97442]: unhealthy Oct 14 04:56:44 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:56:44 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 04:56:44 localhost podman[97441]: 2025-10-14 08:56:44.998148413 +0000 UTC m=+0.532582399 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 14 04:56:45 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:56:45 localhost systemd[1]: tmp-crun.JamSGh.mount: Deactivated successfully. Oct 14 04:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:56:49 localhost podman[97531]: 2025-10-14 08:56:49.58652643 +0000 UTC m=+0.120345813 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vendor=Red Hat, Inc.) 
Oct 14 04:56:49 localhost podman[97531]: 2025-10-14 08:56:49.778372289 +0000 UTC m=+0.312191682 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, release=1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:56:49 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:57:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:57:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:57:04 localhost podman[97562]: 2025-10-14 08:57:04.537577393 +0000 UTC m=+0.073516731 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step3, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible) Oct 14 04:57:04 localhost podman[97562]: 2025-10-14 08:57:04.553259098 +0000 UTC m=+0.089198446 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:57:04 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:57:04 localhost systemd[1]: tmp-crun.zad7vR.mount: Deactivated successfully. 
Oct 14 04:57:04 localhost podman[97561]: 2025-10-14 08:57:04.648966158 +0000 UTC m=+0.187715311 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, 
description=Red Hat OpenStack Platform 17.1 collectd, release=2, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, io.openshift.expose-services=, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git) Oct 14 04:57:04 localhost podman[97561]: 2025-10-14 08:57:04.687075378 +0000 UTC m=+0.225824521 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, batch=17.1_20250721.1, release=2, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:04:03, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, io.openshift.expose-services=) Oct 14 04:57:04 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:57:13 localhost podman[97600]: 2025-10-14 08:57:13.552906104 +0000 UTC m=+0.091943721 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, vcs-type=git, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.33.12) Oct 14 04:57:13 localhost podman[97600]: 2025-10-14 08:57:13.587927242 +0000 UTC m=+0.126964899 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 04:57:13 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:57:13 localhost podman[97602]: 2025-10-14 08:57:13.666467586 +0000 UTC m=+0.196461923 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Oct 14 04:57:13 localhost podman[97601]: 2025-10-14 08:57:13.718312241 +0000 UTC m=+0.250631820 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.expose-services=, name=rhosp17/openstack-cron, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 04:57:13 localhost podman[97602]: 2025-10-14 08:57:13.722268616 +0000 UTC m=+0.252262953 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public) Oct 14 04:57:13 localhost podman[97601]: 2025-10-14 08:57:13.731271626 +0000 UTC m=+0.263591215 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 
'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., release=1) Oct 14 04:57:13 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:57:13 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:57:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:57:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:57:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 04:57:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:57:15 localhost systemd[1]: tmp-crun.wAXPp1.mount: Deactivated successfully. Oct 14 04:57:15 localhost podman[97670]: 2025-10-14 08:57:15.549882108 +0000 UTC m=+0.088015356 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=ovn_controller, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true) Oct 14 04:57:15 localhost podman[97678]: 2025-10-14 08:57:15.613396173 +0000 UTC m=+0.138755612 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Oct 14 04:57:15 localhost podman[97671]: 2025-10-14 08:57:15.578707863 +0000 UTC m=+0.109367832 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:57:15 localhost podman[97670]: 2025-10-14 08:57:15.629823888 +0000 UTC m=+0.167957126 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, batch=17.1_20250721.1, container_name=ovn_controller, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git) Oct 14 04:57:15 localhost podman[97672]: 2025-10-14 08:57:15.669288965 +0000 UTC m=+0.196571905 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:57:15 localhost podman[97670]: unhealthy Oct 14 04:57:15 localhost podman[97678]: 2025-10-14 08:57:15.691287959 +0000 UTC m=+0.216647408 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-type=git, version=17.1.9) Oct 14 04:57:15 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:57:15 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:57:15 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:57:15 localhost podman[97672]: 2025-10-14 08:57:15.712128132 +0000 UTC m=+0.239411082 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 14 04:57:15 localhost podman[97672]: unhealthy Oct 14 04:57:15 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:57:15 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 04:57:15 localhost podman[97671]: 2025-10-14 08:57:15.932226801 +0000 UTC m=+0.462886760 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 04:57:15 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:57:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:57:20 localhost podman[97753]: 2025-10-14 08:57:20.538821212 +0000 UTC m=+0.078061532 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 04:57:20 localhost podman[97753]: 2025-10-14 08:57:20.73365719 +0000 UTC m=+0.272897520 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12) Oct 14 04:57:20 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:57:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:57:22 localhost recover_tripleo_nova_virtqemud[97784]: 62532 Oct 14 04:57:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:57:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 14 04:57:35 localhost sshd[97785]: main: sshd: ssh-rsa algorithm is disabled Oct 14 04:57:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:57:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:57:35 localhost systemd[1]: tmp-crun.u7yh4r.mount: Deactivated successfully. Oct 14 04:57:35 localhost podman[97787]: 2025-10-14 08:57:35.47350178 +0000 UTC m=+0.086873165 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, release=2, config_id=tripleo_step3, container_name=collectd, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container) Oct 14 04:57:35 localhost podman[97788]: 2025-10-14 08:57:35.532075504 +0000 UTC m=+0.140497808 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, vcs-type=git, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:57:35 localhost podman[97787]: 2025-10-14 08:57:35.559188453 +0000 UTC m=+0.172559848 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, vcs-type=git, tcib_managed=true, architecture=x86_64, 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=collectd) Oct 14 04:57:35 localhost podman[97788]: 2025-10-14 08:57:35.567187795 +0000 UTC m=+0.175610119 container exec_died 
df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=iscsid, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12) Oct 14 04:57:35 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:57:35 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:57:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:57:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:57:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:57:44 localhost systemd[1]: tmp-crun.4vb4I6.mount: Deactivated successfully. Oct 14 04:57:44 localhost systemd[1]: tmp-crun.pXpq0B.mount: Deactivated successfully. 
Oct 14 04:57:44 localhost podman[97900]: 2025-10-14 08:57:44.590079119 +0000 UTC m=+0.124425731 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, release=1, 
managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9) Oct 14 04:57:44 localhost podman[97902]: 2025-10-14 08:57:44.652464383 +0000 UTC m=+0.184147525 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, build-date=2025-07-21T15:29:47, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git) Oct 14 04:57:44 localhost podman[97901]: 2025-10-14 08:57:44.623834884 +0000 UTC m=+0.158372961 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-cron, container_name=logrotate_crond, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:57:44 localhost podman[97901]: 2025-10-14 08:57:44.707114044 +0000 UTC m=+0.241652121 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO 
Team, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:57:44 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:57:44 localhost podman[97902]: 2025-10-14 08:57:44.759603786 +0000 UTC m=+0.291286908 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.9, build-date=2025-07-21T15:29:47, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 14 04:57:44 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:57:44 localhost podman[97900]: 2025-10-14 08:57:44.827711833 +0000 UTC m=+0.362058445 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 04:57:44 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:57:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:57:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:57:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:57:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:57:46 localhost podman[97975]: 2025-10-14 08:57:46.544506505 +0000 UTC m=+0.081025940 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, release=1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:57:46 localhost systemd[1]: tmp-crun.kptW63.mount: Deactivated successfully. Oct 14 04:57:46 localhost podman[97976]: 2025-10-14 08:57:46.608879823 +0000 UTC m=+0.142713967 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step4, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) 
Oct 14 04:57:46 localhost podman[97977]: 2025-10-14 08:57:46.656600479 +0000 UTC m=+0.185725238 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true) Oct 14 04:57:46 localhost podman[97977]: 2025-10-14 08:57:46.680195694 +0000 UTC m=+0.209320443 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, config_id=tripleo_step5, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=) Oct 14 04:57:46 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:57:46 localhost podman[97976]: 2025-10-14 08:57:46.740179176 +0000 UTC m=+0.274013250 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible) Oct 14 04:57:46 localhost podman[97976]: unhealthy Oct 14 04:57:46 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:57:46 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 04:57:46 localhost podman[97974]: 2025-10-14 08:57:46.820349222 +0000 UTC m=+0.358324706 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:57:46 localhost podman[97974]: 2025-10-14 08:57:46.837047705 +0000 
UTC m=+0.375023189 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, container_name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.expose-services=) Oct 14 04:57:46 localhost podman[97974]: unhealthy Oct 14 04:57:46 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process 
exited, code=exited, status=1/FAILURE Oct 14 04:57:46 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:57:46 localhost podman[97975]: 2025-10-14 08:57:46.886417215 +0000 UTC m=+0.422936660 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, release=1) Oct 14 04:57:46 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:57:47 localhost systemd[1]: tmp-crun.oXmer4.mount: Deactivated successfully. Oct 14 04:57:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:57:51 localhost podman[98055]: 2025-10-14 08:57:51.538585166 +0000 UTC m=+0.083031754 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, release=1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:57:51 localhost podman[98055]: 2025-10-14 08:57:51.73776636 +0000 UTC m=+0.282212898 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public) Oct 14 04:57:51 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:58:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:58:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:58:06 localhost systemd[1]: tmp-crun.jaTdvs.mount: Deactivated successfully. Oct 14 04:58:06 localhost podman[98085]: 2025-10-14 08:58:06.571873839 +0000 UTC m=+0.106440595 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, config_id=tripleo_step3, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:58:06 localhost podman[98084]: 2025-10-14 08:58:06.543526917 +0000 UTC m=+0.082634694 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, config_id=tripleo_step3, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.expose-services=, release=2, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:58:06 localhost podman[98085]: 2025-10-14 08:58:06.609293491 +0000 UTC m=+0.143860297 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:58:06 localhost podman[98084]: 2025-10-14 08:58:06.624063384 +0000 UTC m=+0.163171181 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, 
architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9) Oct 14 04:58:06 localhost systemd[1]: 
df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:58:06 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 04:58:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:58:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:58:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:58:15 localhost podman[98125]: 2025-10-14 08:58:15.547537787 +0000 UTC m=+0.082995513 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron) Oct 14 04:58:15 localhost podman[98125]: 2025-10-14 08:58:15.552079478 +0000 UTC m=+0.087537164 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, name=rhosp17/openstack-cron) Oct 14 04:58:15 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:58:15 localhost systemd[1]: tmp-crun.u8nE4s.mount: Deactivated successfully. 
Oct 14 04:58:15 localhost podman[98124]: 2025-10-14 08:58:15.600684577 +0000 UTC m=+0.141067173 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9) Oct 14 04:58:15 localhost podman[98126]: 2025-10-14 08:58:15.642838825 +0000 UTC m=+0.176708558 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi) Oct 14 04:58:15 localhost podman[98124]: 2025-10-14 08:58:15.65920847 +0000 UTC m=+0.199591046 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 14 04:58:15 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:58:15 localhost podman[98126]: 2025-10-14 08:58:15.672668687 +0000 UTC m=+0.206538510 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, distribution-scope=public) Oct 14 04:58:15 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:58:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:58:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:58:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:58:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:58:17 localhost podman[98202]: 2025-10-14 08:58:17.577825066 +0000 UTC m=+0.107676238 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, tcib_managed=true, container_name=nova_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:58:17 localhost podman[98199]: 2025-10-14 08:58:17.548678982 +0000 UTC m=+0.089303860 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., release=1) Oct 14 04:58:17 localhost podman[98202]: 2025-10-14 08:58:17.655117055 +0000 UTC m=+0.184968207 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37) Oct 14 04:58:17 localhost 
systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:58:17 localhost podman[98200]: 2025-10-14 08:58:17.706084607 +0000 UTC m=+0.246076568 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1) Oct 14 04:58:17 localhost podman[98199]: 2025-10-14 08:58:17.73331791 +0000 UTC m=+0.273942728 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc.) Oct 14 04:58:17 localhost podman[98199]: unhealthy Oct 14 04:58:17 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:58:17 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:58:17 localhost podman[98201]: 2025-10-14 08:58:17.658225778 +0000 UTC m=+0.192033555 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_metadata_agent, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true) Oct 14 04:58:17 localhost podman[98201]: 2025-10-14 08:58:17.787147628 +0000 UTC m=+0.320955325 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 14 04:58:17 localhost podman[98201]: unhealthy Oct 14 04:58:17 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:58:17 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 04:58:18 localhost podman[98200]: 2025-10-14 08:58:18.053850463 +0000 UTC m=+0.593842424 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 14 04:58:18 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:58:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:58:22 localhost podman[98291]: 2025-10-14 08:58:22.540707568 +0000 UTC m=+0.082932991 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=metrics_qdr, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:58:22 localhost podman[98291]: 2025-10-14 08:58:22.752522107 +0000 UTC m=+0.294747480 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., 
batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 14 04:58:22 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:58:37 localhost systemd[1]: tmp-crun.QUbLMU.mount: Deactivated successfully. 
Oct 14 04:58:37 localhost podman[98320]: 2025-10-14 08:58:37.539426444 +0000 UTC m=+0.080262520 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, 
vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, version=17.1.9, name=rhosp17/openstack-collectd) Oct 14 04:58:37 localhost podman[98320]: 2025-10-14 08:58:37.573851498 +0000 UTC m=+0.114687574 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=2, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.expose-services=, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, io.buildah.version=1.33.12) Oct 14 04:58:37 localhost podman[98321]: 2025-10-14 08:58:37.592231945 +0000 UTC m=+0.130103222 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:58:37 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:58:37 localhost podman[98321]: 2025-10-14 08:58:37.60033939 +0000 UTC m=+0.138210617 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, release=1, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 04:58:37 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:58:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:58:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:58:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:58:46 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 04:58:46 localhost recover_tripleo_nova_virtqemud[98454]: 62532 Oct 14 04:58:46 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 04:58:46 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 04:58:46 localhost systemd[1]: tmp-crun.IpR3Zv.mount: Deactivated successfully. 
Oct 14 04:58:46 localhost podman[98437]: 2025-10-14 08:58:46.589054797 +0000 UTC m=+0.126805725 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:58:46 localhost podman[98437]: 2025-10-14 08:58:46.627160567 +0000 UTC m=+0.164911525 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9) Oct 14 04:58:46 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:58:46 localhost podman[98435]: 2025-10-14 08:58:46.641683543 +0000 UTC m=+0.179226635 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, release=1, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4) Oct 14 04:58:46 localhost podman[98436]: 2025-10-14 08:58:46.565307767 +0000 UTC m=+0.103367043 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, batch=17.1_20250721.1, tcib_managed=true, container_name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 04:58:46 localhost podman[98435]: 2025-10-14 08:58:46.679189538 +0000 UTC m=+0.216732660 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) 
Oct 14 04:58:46 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:58:46 localhost podman[98436]: 2025-10-14 08:58:46.699193639 +0000 UTC m=+0.237252945 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
version=17.1.9, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 14 04:58:46 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:58:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:58:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:58:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:58:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 04:58:48 localhost systemd[1]: tmp-crun.318I2d.mount: Deactivated successfully. 
Oct 14 04:58:48 localhost podman[98507]: 2025-10-14 08:58:48.543502493 +0000 UTC m=+0.082468918 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, release=1) Oct 14 04:58:48 localhost podman[98513]: 2025-10-14 08:58:48.611963599 +0000 
UTC m=+0.138454335 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, batch=17.1_20250721.1, container_name=nova_compute, release=1, architecture=x86_64) Oct 14 04:58:48 localhost podman[98508]: 2025-10-14 08:58:48.567795868 +0000 UTC m=+0.099721837 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:58:48 localhost podman[98507]: 2025-10-14 08:58:48.626951977 +0000 UTC m=+0.165918402 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, release=1, tcib_managed=true, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9) Oct 14 04:58:48 localhost podman[98507]: unhealthy Oct 14 04:58:48 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:58:48 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 04:58:48 localhost podman[98513]: 2025-10-14 08:58:48.668009976 +0000 UTC m=+0.194500711 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, vcs-type=git, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:58:48 localhost podman[98509]: 2025-10-14 08:58:48.675096584 +0000 UTC m=+0.205526903 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, 
vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, version=17.1.9, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 14 04:58:48 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:58:48 localhost podman[98509]: 2025-10-14 08:58:48.69002487 +0000 UTC m=+0.220455169 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12) Oct 14 04:58:48 localhost podman[98509]: unhealthy Oct 14 04:58:48 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:58:48 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 04:58:48 localhost podman[98508]: 2025-10-14 08:58:48.931867415 +0000 UTC m=+0.463793344 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4) Oct 14 04:58:48 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:58:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:58:53 localhost podman[98595]: 2025-10-14 08:58:53.544963159 +0000 UTC m=+0.084895234 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, config_id=tripleo_step1, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible) Oct 14 04:58:53 localhost podman[98595]: 2025-10-14 08:58:53.770287836 +0000 UTC m=+0.310219871 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1) Oct 14 04:58:53 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:59:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:59:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 04:59:08 localhost systemd[1]: tmp-crun.025qDz.mount: Deactivated successfully. 
Oct 14 04:59:08 localhost podman[98624]: 2025-10-14 08:59:08.567920624 +0000 UTC m=+0.105475891 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=2, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git) Oct 14 04:59:08 localhost podman[98624]: 2025-10-14 08:59:08.604229299 +0000 UTC m=+0.141784586 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 14 04:59:08 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:59:08 localhost podman[98625]: 2025-10-14 08:59:08.658819144 +0000 UTC m=+0.190363100 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=iscsid, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 04:59:08 localhost podman[98625]: 2025-10-14 08:59:08.673300593 +0000 UTC m=+0.204844539 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible) Oct 14 04:59:08 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:59:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:59:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:59:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 04:59:17 localhost podman[98666]: 2025-10-14 08:59:17.543516006 +0000 UTC m=+0.077384798 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T15:29:47) Oct 14 04:59:17 localhost podman[98666]: 2025-10-14 08:59:17.59508306 +0000 UTC m=+0.128951872 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 14 04:59:17 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:59:17 localhost podman[98665]: 2025-10-14 08:59:17.647305121 +0000 UTC m=+0.183853745 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond) Oct 14 04:59:17 localhost podman[98664]: 2025-10-14 08:59:17.599088488 +0000 UTC m=+0.139189187 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, container_name=ceilometer_agent_compute) Oct 14 04:59:17 localhost podman[98664]: 2025-10-14 08:59:17.679543267 +0000 UTC m=+0.219644006 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container) 
Oct 14 04:59:17 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 04:59:17 localhost podman[98665]: 2025-10-14 08:59:17.736639549 +0000 UTC m=+0.273188163 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.33.12) Oct 14 04:59:17 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 04:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:59:19 localhost podman[98735]: 2025-10-14 08:59:19.551617055 +0000 UTC m=+0.088724192 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 14 04:59:19 localhost podman[98735]: 2025-10-14 08:59:19.566279528 +0000 
UTC m=+0.103386675 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 14 04:59:19 localhost podman[98736]: 2025-10-14 08:59:19.607232968 +0000 UTC m=+0.141452557 container health_status 
5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, version=17.1.9, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, release=1, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:59:19 localhost podman[98735]: unhealthy Oct 14 04:59:19 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:59:19 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 04:59:19 localhost podman[98737]: 2025-10-14 08:59:19.698138598 +0000 UTC m=+0.228817382 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64) Oct 14 04:59:19 localhost podman[98738]: 2025-10-14 08:59:19.769226506 +0000 UTC m=+0.296901460 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.9, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, 
maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 14 04:59:19 localhost podman[98737]: 2025-10-14 08:59:19.787223109 +0000 UTC m=+0.317901843 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 04:59:19 localhost podman[98737]: unhealthy Oct 14 04:59:19 localhost podman[98738]: 2025-10-14 08:59:19.797064443 +0000 UTC m=+0.324739387 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, version=17.1.9, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37) Oct 14 04:59:19 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:59:19 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 04:59:19 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 04:59:19 localhost podman[98736]: 2025-10-14 08:59:19.996427474 +0000 UTC m=+0.530647073 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container) Oct 14 04:59:20 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:59:20 localhost systemd[1]: tmp-crun.TYt61o.mount: Deactivated successfully. Oct 14 04:59:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 04:59:24 localhost podman[98818]: 2025-10-14 08:59:24.554704082 +0000 UTC m=+0.095645668 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, version=17.1.9) Oct 14 04:59:24 localhost podman[98818]: 2025-10-14 08:59:24.750324933 +0000 UTC m=+0.291266479 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 04:59:24 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 04:59:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 04:59:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 04:59:39 localhost podman[98848]: 2025-10-14 08:59:39.547881 +0000 UTC m=+0.083948045 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, 
com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git) Oct 14 04:59:39 localhost podman[98848]: 2025-10-14 08:59:39.560985672 +0000 UTC m=+0.097052647 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, batch=17.1_20250721.1, container_name=iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 14 04:59:39 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 04:59:39 localhost podman[98847]: 2025-10-14 08:59:39.607078158 +0000 UTC m=+0.143625165 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp17/openstack-collectd) Oct 14 04:59:39 localhost podman[98847]: 2025-10-14 08:59:39.647276898 +0000 UTC m=+0.183823885 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, release=2, build-date=2025-07-21T13:04:03, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 04:59:39 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 04:59:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 04:59:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 04:59:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 04:59:48 localhost podman[98966]: 2025-10-14 08:59:48.573409812 +0000 UTC m=+0.073364520 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, 
tcib_managed=true, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9) Oct 14 04:59:48 localhost podman[98966]: 2025-10-14 08:59:48.61505737 +0000 UTC m=+0.115012058 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cron-container) Oct 14 04:59:48 localhost systemd[1]: tmp-crun.vybplb.mount: Deactivated successfully. Oct 14 04:59:48 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 04:59:48 localhost podman[98965]: 2025-10-14 08:59:48.636810574 +0000 UTC m=+0.138344455 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 04:59:48 localhost podman[98967]: 2025-10-14 08:59:48.676104258 +0000 UTC m=+0.171214426 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:59:48 localhost podman[98965]: 2025-10-14 08:59:48.692167029 +0000 UTC m=+0.193700900 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, release=1) Oct 14 04:59:48 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 04:59:48 localhost podman[98967]: 2025-10-14 08:59:48.736385836 +0000 UTC m=+0.231495994 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, maintainer=OpenStack 
TripleO Team, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 04:59:48 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 04:59:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 04:59:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 04:59:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 04:59:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 04:59:50 localhost podman[99037]: 2025-10-14 08:59:50.549282005 +0000 UTC m=+0.083847942 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack 
Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:59:50 localhost podman[99038]: 2025-10-14 08:59:50.611935987 +0000 UTC m=+0.143528284 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, io.openshift.expose-services=) Oct 14 04:59:50 localhost podman[99038]: 2025-10-14 08:59:50.627185856 +0000 UTC m=+0.158778143 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 04:59:50 localhost podman[99038]: unhealthy Oct 14 
04:59:50 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:59:50 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 04:59:50 localhost systemd[1]: tmp-crun.UcI5bh.mount: Deactivated successfully. Oct 14 04:59:50 localhost podman[99036]: 2025-10-14 08:59:50.717366046 +0000 UTC m=+0.254075219 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, release=1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) Oct 14 04:59:50 localhost podman[99039]: 2025-10-14 08:59:50.767137713 +0000 UTC m=+0.292705987 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, container_name=nova_compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 04:59:50 localhost podman[99036]: 2025-10-14 08:59:50.782035112 +0000 UTC m=+0.318744335 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, container_name=ovn_controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 04:59:50 localhost podman[99036]: unhealthy Oct 14 04:59:50 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 04:59:50 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 04:59:50 localhost podman[99039]: 2025-10-14 08:59:50.793926342 +0000 UTC m=+0.319494546 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.openshift.expose-services=, vcs-type=git, container_name=nova_compute, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:59:50 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 04:59:51 localhost podman[99037]: 2025-10-14 08:59:51.020618057 +0000 UTC m=+0.555183984 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., container_name=nova_migration_target, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 04:59:51 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 04:59:51 localhost systemd[1]: tmp-crun.sFT4j6.mount: Deactivated successfully. Oct 14 04:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 04:59:55 localhost podman[99118]: 2025-10-14 08:59:55.554704234 +0000 UTC m=+0.096023448 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true) Oct 14 04:59:55 localhost podman[99118]: 2025-10-14 08:59:55.777251018 +0000 UTC m=+0.318570202 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=) Oct 14 04:59:55 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:00:02 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:00:02 localhost recover_tripleo_nova_virtqemud[99152]: 62532 Oct 14 05:00:02 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:00:02 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 14 05:00:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:00:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:00:10 localhost podman[99154]: 2025-10-14 09:00:10.607179453 +0000 UTC m=+0.141722124 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12) Oct 14 05:00:10 localhost podman[99153]: 2025-10-14 09:00:10.576644293 +0000 UTC m=+0.111371009 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, release=2, batch=17.1_20250721.1, container_name=collectd, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:00:10 localhost podman[99154]: 2025-10-14 09:00:10.642985324 +0000 UTC m=+0.177527975 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1) Oct 14 05:00:10 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 05:00:10 localhost podman[99153]: 2025-10-14 09:00:10.658226664 +0000 UTC m=+0.192953340 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, release=2, vcs-type=git) Oct 14 05:00:10 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:00:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:00:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:00:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:00:19 localhost podman[99192]: 2025-10-14 09:00:19.55356129 +0000 UTC m=+0.086626647 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron) Oct 14 05:00:19 localhost podman[99192]: 2025-10-14 09:00:19.567384071 +0000 UTC m=+0.100449428 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp 
openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:00:19 localhost podman[99191]: 2025-10-14 09:00:19.599793 +0000 UTC m=+0.135812326 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true) Oct 14 05:00:19 localhost podman[99191]: 2025-10-14 09:00:19.62398992 +0000 UTC m=+0.160009176 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, release=1, architecture=x86_64, build-date=2025-07-21T14:45:33, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 14 05:00:19 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:00:19 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 05:00:19 localhost podman[99193]: 2025-10-14 09:00:19.709399373 +0000 UTC m=+0.237963888 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git) Oct 14 05:00:19 localhost podman[99193]: 2025-10-14 09:00:19.740013534 +0000 UTC m=+0.268578049 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T15:29:47, distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 14 05:00:19 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:00:21 localhost podman[99266]: 2025-10-14 09:00:21.563462556 +0000 UTC m=+0.094373913 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=ovn_metadata_agent, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9) Oct 14 05:00:21 localhost systemd[1]: tmp-crun.UrrnHf.mount: Deactivated successfully. Oct 14 05:00:21 localhost podman[99266]: 2025-10-14 09:00:21.609210985 +0000 UTC m=+0.140122352 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, 
batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, release=1) Oct 14 05:00:21 localhost podman[99264]: 2025-10-14 09:00:21.611814245 +0000 UTC m=+0.150134341 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, io.openshift.tags=rhosp osp 
openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:00:21 localhost podman[99264]: 2025-10-14 09:00:21.629168991 +0000 UTC m=+0.167489087 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 05:00:21 localhost podman[99264]: unhealthy Oct 14 05:00:21 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:00:21 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:00:21 localhost podman[99266]: unhealthy Oct 14 05:00:21 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:00:21 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:00:21 localhost podman[99265]: 2025-10-14 09:00:21.71258028 +0000 UTC m=+0.246116117 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target) Oct 14 05:00:21 localhost podman[99270]: 2025-10-14 09:00:21.762497029 +0000 UTC m=+0.289598914 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, release=1) Oct 14 05:00:21 localhost podman[99270]: 2025-10-14 09:00:21.789054612 +0000 UTC m=+0.316156547 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 14 05:00:21 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:00:22 localhost podman[99265]: 2025-10-14 09:00:22.119193853 +0000 UTC m=+0.652729710 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1) Oct 14 05:00:22 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:00:22 localhost systemd[1]: tmp-crun.GfCWqV.mount: Deactivated successfully. Oct 14 05:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 05:00:26 localhost podman[99353]: 2025-10-14 09:00:26.550671218 +0000 UTC m=+0.083522963 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, config_id=tripleo_step1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:07:59, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9) Oct 14 05:00:26 localhost podman[99353]: 2025-10-14 09:00:26.70721309 +0000 UTC m=+0.240064835 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Oct 14 05:00:26 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:00:31 localhost sshd[99383]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:00:31 localhost sshd[99384]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:00:39 localhost systemd[1]: session-28.scope: Deactivated successfully. Oct 14 05:00:39 localhost systemd[1]: session-28.scope: Consumed 7min 25.832s CPU time. Oct 14 05:00:39 localhost systemd-logind[760]: Session 28 logged out. Waiting for processes to exit. Oct 14 05:00:39 localhost systemd-logind[760]: Removed session 28. Oct 14 05:00:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:00:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:00:41 localhost podman[99385]: 2025-10-14 09:00:41.550860373 +0000 UTC m=+0.093152401 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, release=2, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=) Oct 14 05:00:41 localhost podman[99385]: 2025-10-14 09:00:41.5652666 +0000 UTC m=+0.107558638 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, release=2, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, container_name=collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12) Oct 14 05:00:41 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:00:41 localhost podman[99386]: 2025-10-14 09:00:41.655704268 +0000 UTC m=+0.193446673 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, 
container_name=iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true) Oct 14 05:00:41 localhost podman[99386]: 2025-10-14 09:00:41.665367737 +0000 UTC m=+0.203110102 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:00:41 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:00:49 localhost systemd[1]: Stopping User Manager for UID 1003... Oct 14 05:00:49 localhost systemd[35465]: Activating special unit Exit the Session... Oct 14 05:00:49 localhost systemd[35465]: Removed slice User Background Tasks Slice. Oct 14 05:00:49 localhost systemd[35465]: Stopped target Main User Target. Oct 14 05:00:49 localhost systemd[35465]: Stopped target Basic System. Oct 14 05:00:49 localhost systemd[35465]: Stopped target Paths. Oct 14 05:00:49 localhost systemd[35465]: Stopped target Sockets. Oct 14 05:00:49 localhost systemd[35465]: Stopped target Timers. Oct 14 05:00:49 localhost systemd[35465]: Stopped Mark boot as successful after the user session has run 2 minutes. Oct 14 05:00:49 localhost systemd[35465]: Stopped Daily Cleanup of User's Temporary Directories. Oct 14 05:00:49 localhost systemd[35465]: Closed D-Bus User Message Bus Socket. Oct 14 05:00:49 localhost systemd[35465]: Stopped Create User's Volatile Files and Directories. Oct 14 05:00:49 localhost systemd[35465]: Removed slice User Application Slice. Oct 14 05:00:49 localhost systemd[35465]: Reached target Shutdown. Oct 14 05:00:49 localhost systemd[35465]: Finished Exit the Session. 
Oct 14 05:00:49 localhost systemd[35465]: Reached target Exit the Session. Oct 14 05:00:49 localhost systemd[1]: user@1003.service: Deactivated successfully. Oct 14 05:00:49 localhost systemd[1]: Stopped User Manager for UID 1003. Oct 14 05:00:49 localhost systemd[1]: user@1003.service: Consumed 5.046s CPU time, read 0B from disk, written 7.0K to disk. Oct 14 05:00:49 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Oct 14 05:00:49 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Oct 14 05:00:49 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Oct 14 05:00:49 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Oct 14 05:00:49 localhost systemd[1]: Removed slice User Slice of UID 1003. Oct 14 05:00:49 localhost systemd[1]: user-1003.slice: Consumed 7min 30.913s CPU time. Oct 14 05:00:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:00:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:00:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:00:50 localhost podman[99502]: 2025-10-14 09:00:50.54116324 +0000 UTC m=+0.086024329 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, release=1, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 14 05:00:50 localhost podman[99502]: 2025-10-14 09:00:50.570364314 +0000 UTC m=+0.115225383 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, tcib_managed=true, container_name=ceilometer_agent_compute, distribution-scope=public, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1) Oct 14 05:00:50 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 05:00:50 localhost podman[99503]: 2025-10-14 09:00:50.654704008 +0000 UTC m=+0.194257845 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-cron, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, io.openshift.expose-services=) Oct 14 05:00:50 localhost podman[99503]: 2025-10-14 09:00:50.690202741 +0000 UTC m=+0.229756518 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vendor=Red 
Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, container_name=logrotate_crond) Oct 14 05:00:50 localhost podman[99504]: 2025-10-14 09:00:50.7006278 +0000 UTC m=+0.238691797 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, version=17.1.9, distribution-scope=public, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:00:50 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 05:00:50 localhost podman[99504]: 2025-10-14 09:00:50.72407863 +0000 UTC m=+0.262142687 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team) Oct 14 05:00:50 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:00:52 localhost podman[99578]: 2025-10-14 09:00:52.552905338 +0000 UTC m=+0.088155938 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:00:52 localhost podman[99576]: 2025-10-14 09:00:52.611426027 +0000 UTC m=+0.149253636 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Oct 14 05:00:52 localhost podman[99578]: 2025-10-14 09:00:52.63756712 +0000 UTC m=+0.172817750 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible) Oct 14 05:00:52 localhost podman[99578]: unhealthy Oct 14 05:00:52 localhost podman[99576]: 2025-10-14 09:00:52.653157278 +0000 UTC m=+0.190984907 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, io.buildah.version=1.33.12, 
tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, vcs-type=git, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:00:52 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:00:52 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:00:52 localhost podman[99576]: unhealthy Oct 14 05:00:52 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:00:52 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:00:52 localhost podman[99579]: 2025-10-14 09:00:52.578466984 +0000 UTC m=+0.106001877 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, config_id=tripleo_step5, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., release=1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:00:52 localhost podman[99577]: 2025-10-14 09:00:52.655533842 +0000 UTC m=+0.193075374 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, version=17.1.9, managed_by=tripleo_ansible, container_name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 14 05:00:52 localhost podman[99579]: 2025-10-14 09:00:52.76311895 +0000 UTC m=+0.290653863 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:00:52 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:00:53 localhost podman[99577]: 2025-10-14 09:00:53.023847508 +0000 UTC m=+0.561389010 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public) Oct 14 05:00:53 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 05:00:57 localhost podman[99665]: 2025-10-14 09:00:57.547532426 +0000 UTC m=+0.088511377 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, io.openshift.expose-services=, distribution-scope=public) Oct 14 05:00:57 localhost podman[99665]: 2025-10-14 09:00:57.762206018 +0000 UTC m=+0.303184959 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, container_name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 05:00:57 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:01:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:01:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:01:12 localhost podman[99723]: 2025-10-14 09:01:12.558529783 +0000 UTC m=+0.085634700 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-iscsid-container, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 05:01:12 localhost podman[99723]: 2025-10-14 09:01:12.572033276 +0000 UTC m=+0.099138183 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, 
architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public) Oct 14 05:01:12 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:01:12 localhost podman[99722]: 2025-10-14 09:01:12.659774581 +0000 UTC m=+0.189066656 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, release=2, config_id=tripleo_step3) Oct 14 05:01:12 localhost podman[99722]: 2025-10-14 09:01:12.698453319 +0000 UTC m=+0.227745334 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, distribution-scope=public, 
com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Oct 14 05:01:12 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated 
successfully. Oct 14 05:01:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:01:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:01:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:01:21 localhost podman[99760]: 2025-10-14 09:01:21.553632487 +0000 UTC m=+0.092893232 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 05:01:21 localhost podman[99760]: 2025-10-14 09:01:21.587304361 +0000 UTC m=+0.126565106 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:01:21 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 05:01:21 localhost podman[99762]: 2025-10-14 09:01:21.609108577 +0000 UTC m=+0.143612635 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git) Oct 14 05:01:21 localhost podman[99762]: 2025-10-14 09:01:21.640467018 +0000 UTC m=+0.174971126 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, release=1, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 05:01:21 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:01:21 localhost podman[99761]: 2025-10-14 09:01:21.660403903 +0000 UTC m=+0.195783614 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:01:21 localhost podman[99761]: 2025-10-14 09:01:21.697273073 +0000 UTC m=+0.232652754 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, container_name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-cron) Oct 14 05:01:21 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 05:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:01:23 localhost podman[99831]: 2025-10-14 09:01:23.540814065 +0000 UTC m=+0.081800017 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 14 05:01:23 localhost systemd[1]: tmp-crun.UFd5NO.mount: Deactivated successfully. Oct 14 05:01:23 localhost podman[99832]: 2025-10-14 09:01:23.613507286 +0000 UTC m=+0.149345449 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, container_name=nova_migration_target, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9) Oct 14 05:01:23 localhost podman[99831]: 2025-10-14 09:01:23.621015218 +0000 UTC m=+0.162001170 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
container_name=ovn_controller, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public) Oct 14 05:01:23 localhost podman[99831]: unhealthy Oct 14 05:01:23 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:01:23 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:01:23 localhost podman[99833]: 2025-10-14 09:01:23.666772776 +0000 UTC m=+0.196603788 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:01:23 localhost podman[99839]: 2025-10-14 09:01:23.578758404 +0000 UTC m=+0.104497017 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack 
Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public) Oct 14 05:01:23 localhost podman[99833]: 2025-10-14 09:01:23.708154286 +0000 UTC m=+0.237985258 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, batch=17.1_20250721.1) Oct 14 05:01:23 localhost podman[99833]: unhealthy Oct 14 05:01:23 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:01:23 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:01:23 localhost podman[99839]: 2025-10-14 09:01:23.762599957 +0000 UTC m=+0.288338610 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, release=1, batch=17.1_20250721.1, config_id=tripleo_step5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:01:23 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 05:01:23 localhost podman[99832]: 2025-10-14 09:01:23.987117774 +0000 UTC m=+0.522955927 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack 
osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 14 05:01:23 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:01:28 localhost podman[99919]: 2025-10-14 09:01:28.548791614 +0000 UTC m=+0.082724742 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, name=rhosp17/openstack-qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr) Oct 14 05:01:28 localhost podman[99919]: 2025-10-14 09:01:28.744109926 +0000 UTC m=+0.278043124 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:01:28 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:01:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:01:42 localhost recover_tripleo_nova_virtqemud[99950]: 62532 Oct 14 05:01:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:01:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 05:01:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. 
Oct 14 05:01:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:01:43 localhost podman[99952]: 2025-10-14 09:01:43.556681827 +0000 UTC m=+0.090106319 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, architecture=x86_64, io.buildah.version=1.33.12) Oct 14 05:01:43 localhost podman[99952]: 2025-10-14 09:01:43.565129974 +0000 UTC m=+0.098554486 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 05:01:43 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:01:43 localhost podman[99951]: 2025-10-14 09:01:43.621129457 +0000 UTC m=+0.155001681 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, 
com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 14 05:01:43 localhost podman[99951]: 2025-10-14 09:01:43.659401645 +0000 UTC m=+0.193273869 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:01:43 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:01:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:01:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:01:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:01:52 localhost podman[100069]: 2025-10-14 09:01:52.538879467 +0000 UTC m=+0.073961987 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, build-date=2025-07-21T15:29:47) Oct 14 05:01:52 localhost podman[100067]: 2025-10-14 09:01:52.617013233 +0000 UTC m=+0.154905759 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, 
build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute) Oct 14 05:01:52 localhost podman[100069]: 2025-10-14 09:01:52.622226873 +0000 UTC m=+0.157309373 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 05:01:52 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 05:01:52 localhost podman[100067]: 2025-10-14 09:01:52.645887429 +0000 UTC m=+0.183779955 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:01:52 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:01:52 localhost podman[100068]: 2025-10-14 09:01:52.56845524 +0000 UTC m=+0.103633273 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, release=1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12) Oct 14 05:01:52 localhost podman[100068]: 2025-10-14 09:01:52.702149079 +0000 UTC m=+0.237327102 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, architecture=x86_64, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 05:01:52 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:01:53 localhost systemd[1]: tmp-crun.gUH1jk.mount: Deactivated successfully. Oct 14 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:01:54 localhost podman[100141]: 2025-10-14 09:01:54.590865584 +0000 UTC m=+0.125295116 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container) Oct 14 05:01:54 localhost podman[100140]: 2025-10-14 09:01:54.603226165 +0000 UTC m=+0.140462992 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1) Oct 14 05:01:54 localhost podman[100140]: 2025-10-14 09:01:54.616989925 +0000 UTC m=+0.154226762 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:01:54 localhost podman[100140]: unhealthy Oct 14 05:01:54 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:01:54 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:01:54 localhost podman[100142]: 2025-10-14 09:01:54.662529927 +0000 UTC m=+0.192105998 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.33.12, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 14 05:01:54 localhost podman[100142]: 2025-10-14 09:01:54.704971195 +0000 UTC m=+0.234547266 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, release=1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Oct 14 05:01:54 localhost podman[100142]: unhealthy Oct 14 05:01:54 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:01:54 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:01:54 localhost podman[100143]: 2025-10-14 09:01:54.716600047 +0000 UTC m=+0.242582052 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:01:54 localhost podman[100143]: 2025-10-14 09:01:54.772025415 +0000 UTC m=+0.298007460 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp 
openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Oct 14 05:01:54 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:01:54 localhost podman[100141]: 2025-10-14 09:01:54.991173407 +0000 UTC m=+0.525602979 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=nova_migration_target, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:01:55 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 05:01:59 localhost podman[100226]: 2025-10-14 09:01:59.541200574 +0000 UTC m=+0.080830661 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-qdrouterd, release=1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team) Oct 14 05:01:59 localhost podman[100226]: 2025-10-14 09:01:59.764180909 +0000 UTC m=+0.303810986 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 05:01:59 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:02:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:02:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:02:14 localhost podman[100255]: 2025-10-14 09:02:14.546834776 +0000 UTC m=+0.089329419 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, name=rhosp17/openstack-collectd, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:02:14 localhost podman[100255]: 2025-10-14 09:02:14.560184764 +0000 UTC m=+0.102679427 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 14 05:02:14 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:02:14 localhost podman[100256]: 2025-10-14 09:02:14.648837474 +0000 UTC m=+0.188585303 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., container_name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 14 05:02:14 localhost podman[100256]: 2025-10-14 09:02:14.660064555 +0000 UTC m=+0.199812384 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 05:02:14 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:02:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:02:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:02:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:02:23 localhost podman[100294]: 2025-10-14 09:02:23.549655077 +0000 UTC m=+0.091219218 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true) Oct 14 05:02:23 localhost podman[100295]: 2025-10-14 09:02:23.599549537 +0000 UTC m=+0.138318903 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, container_name=logrotate_crond, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, config_id=tripleo_step4) Oct 14 05:02:23 localhost podman[100295]: 2025-10-14 09:02:23.636255942 +0000 UTC m=+0.175025338 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9) Oct 14 05:02:23 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 05:02:23 localhost podman[100296]: 2025-10-14 09:02:23.653904236 +0000 UTC m=+0.189339373 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 14 05:02:23 localhost podman[100294]: 2025-10-14 09:02:23.678322631 +0000 UTC m=+0.219886802 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, architecture=x86_64, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33) Oct 14 05:02:23 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:02:23 localhost podman[100296]: 2025-10-14 09:02:23.737198341 +0000 UTC m=+0.272633518 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 14 05:02:23 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:02:25 localhost systemd[1]: tmp-crun.WHzBhM.mount: Deactivated successfully. Oct 14 05:02:25 localhost podman[100368]: 2025-10-14 09:02:25.550300047 +0000 UTC m=+0.086680968 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, release=1, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:02:25 localhost systemd[1]: tmp-crun.GFIgcH.mount: Deactivated successfully. Oct 14 05:02:25 localhost podman[100369]: 2025-10-14 09:02:25.604004198 +0000 UTC m=+0.135722934 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:02:25 localhost podman[100370]: 2025-10-14 09:02:25.566625674 +0000 UTC m=+0.096026858 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, release=1, config_id=tripleo_step5, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public) Oct 14 05:02:25 localhost podman[100367]: 2025-10-14 09:02:25.590477675 +0000 UTC m=+0.128338966 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:02:25 localhost podman[100369]: 2025-10-14 09:02:25.646079888 +0000 UTC m=+0.177798654 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible) Oct 14 05:02:25 localhost podman[100369]: unhealthy Oct 14 05:02:25 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:02:25 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:02:25 localhost podman[100367]: 2025-10-14 09:02:25.673069821 +0000 UTC m=+0.210931112 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 14 05:02:25 localhost podman[100367]: unhealthy Oct 14 05:02:25 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:02:25 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:02:25 localhost podman[100370]: 2025-10-14 09:02:25.698652188 +0000 UTC m=+0.228053362 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, release=1, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:02:25 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 05:02:25 localhost podman[100368]: 2025-10-14 09:02:25.919090665 +0000 UTC m=+0.455471566 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, architecture=x86_64, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat 
OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12) Oct 14 05:02:25 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:02:30 localhost systemd[1]: tmp-crun.kbNKVL.mount: Deactivated successfully. Oct 14 05:02:30 localhost podman[100451]: 2025-10-14 09:02:30.550930427 +0000 UTC m=+0.086835542 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59) Oct 14 05:02:30 localhost podman[100451]: 2025-10-14 09:02:30.775142995 +0000 UTC m=+0.311048120 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, 
com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 14 05:02:30 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 05:02:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:02:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 5654 writes, 25K keys, 5654 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5654 writes, 706 syncs, 8.01 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4 writes, 8 keys, 4 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 4 writes, 2 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:02:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:02:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:02:45 localhost podman[100481]: 2025-10-14 09:02:45.538807752 +0000 UTC m=+0.080319638 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, release=2, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team) Oct 14 05:02:45 localhost podman[100481]: 2025-10-14 09:02:45.550247339 +0000 UTC m=+0.091759245 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=) Oct 14 05:02:45 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:02:45 localhost podman[100482]: 2025-10-14 09:02:45.592159244 +0000 UTC m=+0.131691816 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, 
version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Oct 14 05:02:45 localhost podman[100482]: 2025-10-14 09:02:45.632077725 +0000 UTC m=+0.171610677 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1) Oct 14 05:02:45 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:02:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:02:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 4835 writes, 21K keys, 4835 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4835 writes, 657 syncs, 7.36 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4 writes, 8 keys, 4 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 4 writes, 2 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:02:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:02:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:02:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:02:54 localhost podman[100597]: 2025-10-14 09:02:54.559318099 +0000 UTC m=+0.094739354 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, 
batch=17.1_20250721.1, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:02:54 localhost podman[100596]: 2025-10-14 09:02:54.613470632 +0000 UTC m=+0.149500403 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, release=1, vendor=Red Hat, Inc.) Oct 14 05:02:54 localhost podman[100596]: 2025-10-14 09:02:54.644951077 +0000 UTC m=+0.180980868 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute) Oct 14 05:02:54 localhost podman[100598]: 2025-10-14 09:02:54.657919425 +0000 UTC m=+0.191075559 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 05:02:54 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 05:02:54 localhost podman[100597]: 2025-10-14 09:02:54.676600117 +0000 UTC m=+0.212021332 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, release=1) Oct 14 05:02:54 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:02:54 localhost podman[100598]: 2025-10-14 09:02:54.718223544 +0000 UTC m=+0.251379688 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47) Oct 14 05:02:54 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:02:56 localhost podman[100666]: 2025-10-14 09:02:56.545779187 +0000 UTC m=+0.078637782 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:02:56 localhost podman[100666]: 2025-10-14 09:02:56.559747922 +0000 UTC m=+0.092606487 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 05:02:56 localhost podman[100666]: unhealthy Oct 14 05:02:56 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:02:56 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:02:56 localhost podman[100664]: 2025-10-14 09:02:56.60214816 +0000 UTC m=+0.136024952 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.openshift.expose-services=, release=1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, tcib_managed=true) Oct 14 05:02:56 localhost systemd[1]: tmp-crun.MG2YLl.mount: Deactivated 
successfully. Oct 14 05:02:56 localhost podman[100664]: 2025-10-14 09:02:56.649814329 +0000 UTC m=+0.183691111 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 05:02:56 localhost podman[100664]: unhealthy Oct 14 05:02:56 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:02:56 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:02:56 localhost podman[100665]: 2025-10-14 09:02:56.653639182 +0000 UTC m=+0.185823439 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4) Oct 14 05:02:56 localhost podman[100667]: 2025-10-14 09:02:56.716422067 +0000 UTC m=+0.240653600 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, tcib_managed=true, container_name=nova_compute) Oct 14 05:02:56 localhost podman[100667]: 2025-10-14 09:02:56.766032018 +0000 UTC m=+0.290263511 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, release=1, 
version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, architecture=x86_64, io.buildah.version=1.33.12, 
io.openshift.expose-services=, build-date=2025-07-21T14:48:37, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:02:56 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:02:57 localhost podman[100665]: 2025-10-14 09:02:57.035210984 +0000 UTC m=+0.567395271 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:02:57 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:03:01 localhost systemd[1]: tmp-crun.uwZJjk.mount: Deactivated successfully. 
Oct 14 05:03:01 localhost podman[100749]: 2025-10-14 09:03:01.532982298 +0000 UTC m=+0.072310652 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, distribution-scope=public, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Oct 14 05:03:01 localhost podman[100749]: 2025-10-14 09:03:01.726158624 +0000 UTC m=+0.265486998 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T13:07:59) Oct 14 05:03:01 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:03:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:03:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:03:16 localhost podman[100778]: 2025-10-14 09:03:16.600500311 +0000 UTC m=+0.140401539 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-collectd, release=2, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:03:16 localhost podman[100778]: 2025-10-14 09:03:16.615104413 +0000 UTC m=+0.155005621 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, release=2, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:03:16 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:03:16 localhost podman[100779]: 2025-10-14 09:03:16.71220269 +0000 UTC m=+0.250228748 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, release=1, config_id=tripleo_step3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:03:16 localhost podman[100779]: 2025-10-14 09:03:16.725113526 +0000 UTC m=+0.263139614 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 14 05:03:16 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:03:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:03:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:03:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:03:25 localhost podman[100819]: 2025-10-14 09:03:25.547541658 +0000 UTC m=+0.079778963 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47) Oct 14 05:03:25 localhost podman[100817]: 2025-10-14 09:03:25.598434593 +0000 UTC m=+0.136862834 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc.) Oct 14 05:03:25 localhost podman[100818]: 2025-10-14 09:03:25.653996275 +0000 UTC m=+0.189937249 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:03:25 localhost podman[100818]: 2025-10-14 09:03:25.667321302 +0000 UTC m=+0.203262316 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, container_name=logrotate_crond) Oct 14 05:03:25 localhost podman[100819]: 2025-10-14 09:03:25.677585218 +0000 UTC m=+0.209822523 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public) Oct 14 05:03:25 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 05:03:25 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:03:25 localhost podman[100817]: 2025-10-14 09:03:25.706085042 +0000 UTC m=+0.244513313 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4) Oct 14 05:03:25 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:03:27 localhost podman[100893]: 2025-10-14 09:03:27.560751423 +0000 UTC m=+0.090582572 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:03:27 localhost systemd[1]: tmp-crun.QliAWk.mount: Deactivated successfully. Oct 14 05:03:27 localhost podman[100893]: 2025-10-14 09:03:27.614061604 +0000 UTC m=+0.143892763 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, release=1) Oct 14 05:03:27 localhost podman[100893]: unhealthy Oct 14 05:03:27 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:03:27 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:03:27 localhost podman[100894]: 2025-10-14 09:03:27.614959208 +0000 UTC m=+0.140936484 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, release=1, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, distribution-scope=public, tcib_managed=true) Oct 14 05:03:27 localhost podman[100892]: 2025-10-14 09:03:27.680428316 +0000 UTC m=+0.215477755 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:48:37) Oct 14 05:03:27 localhost podman[100894]: 2025-10-14 09:03:27.703976577 +0000 UTC m=+0.229953833 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, architecture=x86_64) Oct 14 05:03:27 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:03:27 localhost podman[100891]: 2025-10-14 09:03:27.72346067 +0000 UTC m=+0.258300963 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=ovn_controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 05:03:27 localhost podman[100891]: 2025-10-14 09:03:27.73797023 +0000 UTC m=+0.272810493 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git) Oct 14 05:03:27 localhost podman[100891]: unhealthy Oct 14 05:03:27 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:03:27 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:03:28 localhost podman[100892]: 2025-10-14 09:03:28.070355751 +0000 UTC m=+0.605405220 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team) Oct 14 05:03:28 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:03:28 localhost systemd[1]: tmp-crun.9fIS1t.mount: Deactivated successfully. Oct 14 05:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:03:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Oct 14 05:03:32 localhost podman[100980]: 2025-10-14 09:03:32.565703981 +0000 UTC m=+0.105063751 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, config_id=tripleo_step1) Oct 14 05:03:32 localhost recover_tripleo_nova_virtqemud[101003]: 62532 Oct 14 05:03:32 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:03:32 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 05:03:32 localhost podman[100980]: 2025-10-14 09:03:32.74228118 +0000 UTC m=+0.281640980 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, release=1, architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd) Oct 14 05:03:32 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:03:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:03:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:03:47 localhost podman[101011]: 2025-10-14 09:03:47.559428682 +0000 UTC m=+0.090487799 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=2, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12) Oct 14 05:03:47 localhost podman[101011]: 2025-10-14 09:03:47.567795707 +0000 UTC m=+0.098854854 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, container_name=collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, version=17.1.9) Oct 14 05:03:47 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:03:47 localhost systemd[1]: tmp-crun.r1YBcc.mount: Deactivated successfully. 
Oct 14 05:03:47 localhost podman[101012]: 2025-10-14 09:03:47.65586621 +0000 UTC m=+0.183200718 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, com.redhat.component=openstack-iscsid-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, 
build-date=2025-07-21T13:27:15, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, architecture=x86_64, vendor=Red Hat, Inc.) Oct 14 05:03:47 localhost podman[101012]: 2025-10-14 09:03:47.664715189 +0000 UTC m=+0.192049737 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container) Oct 14 05:03:47 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:03:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:03:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:03:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:03:56 localhost podman[101129]: 2025-10-14 09:03:56.557359369 +0000 UTC m=+0.086010529 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, distribution-scope=public) Oct 14 05:03:56 localhost systemd[1]: tmp-crun.qs3f2n.mount: Deactivated successfully. Oct 14 05:03:56 localhost podman[101128]: 2025-10-14 09:03:56.608610215 +0000 UTC m=+0.143277867 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-07-21T13:07:52, distribution-scope=public, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 14 05:03:56 localhost podman[101128]: 2025-10-14 09:03:56.616767824 +0000 UTC m=+0.151435486 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, name=rhosp17/openstack-cron, tcib_managed=true, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:03:56 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 05:03:56 localhost podman[101129]: 2025-10-14 09:03:56.667704131 +0000 UTC m=+0.196355361 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 05:03:56 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:03:56 localhost podman[101127]: 2025-10-14 09:03:56.767573202 +0000 UTC m=+0.301692939 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9) Oct 14 05:03:56 localhost podman[101127]: 2025-10-14 09:03:56.822506907 +0000 UTC m=+0.356626614 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 14 05:03:56 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:03:58 localhost podman[101199]: 2025-10-14 09:03:58.546803318 +0000 UTC m=+0.087988892 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, container_name=ovn_controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., release=1, 
version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:03:58 localhost systemd[1]: tmp-crun.Cr6uGY.mount: Deactivated successfully. Oct 14 05:03:58 localhost podman[101199]: 2025-10-14 09:03:58.606305755 +0000 UTC m=+0.147491299 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, vcs-type=git, vendor=Red Hat, Inc., container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, release=1, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 14 05:03:58 localhost podman[101199]: unhealthy Oct 14 05:03:58 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:03:58 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:03:58 localhost podman[101200]: 2025-10-14 09:03:58.650614024 +0000 UTC m=+0.187832113 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=nova_migration_target, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, vcs-type=git) Oct 14 05:03:58 localhost podman[101202]: 2025-10-14 09:03:58.610831907 +0000 UTC m=+0.144345546 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, version=17.1.9, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64) Oct 14 05:03:58 localhost podman[101202]: 2025-10-14 09:03:58.690747462 +0000 UTC m=+0.224261041 
container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, container_name=nova_compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:03:58 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 05:03:58 localhost podman[101201]: 2025-10-14 09:03:58.704869471 +0000 UTC m=+0.240796424 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.9, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1) Oct 14 05:03:58 localhost podman[101201]: 2025-10-14 09:03:58.718942359 +0000 UTC m=+0.254869292 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1) Oct 14 05:03:58 localhost podman[101201]: unhealthy Oct 14 05:03:58 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:03:58 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:03:59 localhost podman[101200]: 2025-10-14 09:03:59.019501586 +0000 UTC m=+0.556719674 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:03:59 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:04:03 localhost podman[101288]: 2025-10-14 09:04:03.515849043 +0000 UTC m=+0.060621539 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 05:04:03 localhost podman[101288]: 2025-10-14 09:04:03.716407996 +0000 UTC m=+0.261180562 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., 
build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public) Oct 14 05:04:03 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:04:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:04:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:04:18 localhost podman[101318]: 2025-10-14 09:04:18.54362388 +0000 UTC m=+0.081142479 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 14 05:04:18 localhost podman[101318]: 2025-10-14 09:04:18.559166407 +0000 UTC m=+0.096685066 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, version=17.1.9, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 14 05:04:18 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:04:18 localhost podman[101317]: 2025-10-14 09:04:18.647148649 +0000 UTC m=+0.185373667 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, tcib_managed=true, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, version=17.1.9, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12) Oct 14 05:04:18 localhost podman[101317]: 2025-10-14 09:04:18.656813988 +0000 UTC m=+0.195038996 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=collectd, release=2, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Oct 14 05:04:18 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:04:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:04:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:04:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:04:27 localhost podman[101355]: 2025-10-14 09:04:27.541313155 +0000 UTC m=+0.078120628 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33) Oct 14 05:04:27 localhost podman[101356]: 2025-10-14 09:04:27.60666629 +0000 UTC m=+0.139361252 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, config_id=tripleo_step4) Oct 14 05:04:27 localhost podman[101356]: 2025-10-14 09:04:27.615496387 +0000 UTC m=+0.148191349 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12) Oct 14 05:04:27 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 05:04:27 localhost podman[101355]: 2025-10-14 09:04:27.67002632 +0000 UTC m=+0.206833833 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1, tcib_managed=true) Oct 14 05:04:27 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:04:27 localhost podman[101357]: 2025-10-14 09:04:27.757333213 +0000 UTC m=+0.288420072 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 14 05:04:27 localhost podman[101357]: 2025-10-14 09:04:27.814662142 +0000 UTC m=+0.345749051 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc.) Oct 14 05:04:27 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:04:29 localhost podman[101431]: 2025-10-14 09:04:29.54253741 +0000 UTC m=+0.078746985 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37) Oct 14 05:04:29 localhost systemd[1]: tmp-crun.RBzDC6.mount: Deactivated successfully. 
Oct 14 05:04:29 localhost podman[101428]: 2025-10-14 09:04:29.589564231 +0000 UTC m=+0.131538381 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:28:44, vcs-type=git, tcib_managed=true, release=1, version=17.1.9, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:04:29 localhost podman[101428]: 2025-10-14 09:04:29.60513193 
+0000 UTC m=+0.147106070 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T13:28:44) Oct 14 05:04:29 localhost podman[101428]: unhealthy Oct 14 05:04:29 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process 
exited, code=exited, status=1/FAILURE Oct 14 05:04:29 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:04:29 localhost podman[101430]: 2025-10-14 09:04:29.565026954 +0000 UTC m=+0.100080027 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:04:29 localhost podman[101431]: 2025-10-14 09:04:29.643471749 +0000 UTC m=+0.179681344 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 05:04:29 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:04:29 localhost podman[101430]: 2025-10-14 09:04:29.695766352 +0000 UTC m=+0.230819415 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, version=17.1.9, distribution-scope=public, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 14 05:04:29 localhost podman[101430]: unhealthy Oct 14 05:04:29 localhost podman[101429]: 2025-10-14 09:04:29.703477299 +0000 UTC m=+0.240840675 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:04:29 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:04:29 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:04:30 localhost podman[101429]: 2025-10-14 09:04:30.070142421 +0000 UTC m=+0.607505807 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, release=1, 
description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:04:30 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:04:34 localhost podman[101517]: 2025-10-14 09:04:34.544437365 +0000 UTC m=+0.082616298 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64) Oct 14 05:04:34 localhost podman[101517]: 2025-10-14 09:04:34.749374955 +0000 UTC m=+0.287553928 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, release=1, distribution-scope=public, io.openshift.expose-services=, container_name=metrics_qdr, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Oct 14 05:04:34 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:04:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:04:42 localhost recover_tripleo_nova_virtqemud[101547]: 62532 Oct 14 05:04:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:04:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 05:04:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. 
Oct 14 05:04:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:04:49 localhost podman[101549]: 2025-10-14 09:04:49.548012691 +0000 UTC m=+0.086648006 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, 
batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc.) Oct 14 05:04:49 localhost podman[101549]: 2025-10-14 09:04:49.557471485 +0000 UTC m=+0.096106750 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9) Oct 14 05:04:49 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:04:49 localhost podman[101548]: 2025-10-14 09:04:49.656662308 +0000 UTC m=+0.196448624 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, config_id=tripleo_step3, release=2, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 14 05:04:49 localhost podman[101548]: 2025-10-14 09:04:49.667501538 +0000 UTC m=+0.207287794 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, name=rhosp17/openstack-collectd, tcib_managed=true, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, container_name=collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack 
osp-17.1, io.buildah.version=1.33.12) Oct 14 05:04:49 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:04:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:04:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:04:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:04:58 localhost systemd[1]: tmp-crun.o8Hflf.mount: Deactivated successfully. Oct 14 05:04:58 localhost podman[101716]: 2025-10-14 09:04:58.558129019 +0000 UTC m=+0.092955575 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team) Oct 14 05:04:58 localhost podman[101715]: 2025-10-14 09:04:58.606888079 +0000 UTC m=+0.142256470 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9) Oct 14 05:04:58 localhost podman[101716]: 2025-10-14 09:04:58.619538978 +0000 UTC m=+0.154365544 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack 
TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:04:58 localhost podman[101715]: 2025-10-14 09:04:58.643074679 +0000 UTC m=+0.178443090 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:04:58 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:04:58 localhost podman[101717]: 2025-10-14 09:04:58.655177435 +0000 UTC m=+0.185816390 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T15:29:47, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, vcs-type=git, release=1) Oct 14 05:04:58 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:04:58 localhost podman[101717]: 2025-10-14 09:04:58.692020433 +0000 UTC m=+0.222659388 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Oct 14 05:04:58 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:05:00 localhost podman[101787]: 2025-10-14 09:05:00.55512255 +0000 UTC m=+0.096972394 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, distribution-scope=public, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44) Oct 14 05:05:00 localhost podman[101790]: 2025-10-14 09:05:00.581685083 
+0000 UTC m=+0.114936166 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, tcib_managed=true, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git) Oct 14 05:05:00 localhost podman[101787]: 2025-10-14 09:05:00.598211306 +0000 UTC m=+0.140061150 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, release=1, config_id=tripleo_step4, container_name=ovn_controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-ovn-controller) Oct 14 05:05:00 localhost podman[101787]: unhealthy Oct 14 05:05:00 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:05:00 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:05:00 localhost podman[101790]: 2025-10-14 09:05:00.611090212 +0000 UTC m=+0.144341225 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, tcib_managed=true) Oct 14 05:05:00 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 05:05:00 localhost podman[101788]: 2025-10-14 09:05:00.599161873 +0000 UTC m=+0.137844761 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team) Oct 14 05:05:00 localhost podman[101789]: 2025-10-14 09:05:00.700235555 +0000 UTC m=+0.236462628 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 14 05:05:00 localhost podman[101789]: 2025-10-14 09:05:00.740588418 +0000 UTC m=+0.276815531 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9) Oct 14 05:05:00 localhost podman[101789]: unhealthy Oct 14 05:05:00 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:05:00 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:05:00 localhost podman[101788]: 2025-10-14 09:05:00.993403404 +0000 UTC m=+0.532086292 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp 
openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1) Oct 14 05:05:01 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 05:05:05 localhost podman[101873]: 2025-10-14 09:05:05.550357925 +0000 UTC m=+0.090470289 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, version=17.1.9, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:05:05 localhost podman[101873]: 2025-10-14 09:05:05.748159475 +0000 UTC m=+0.288271799 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 05:05:05 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:05:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:05:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:05:20 localhost systemd[1]: tmp-crun.pZa2Iy.mount: Deactivated successfully. 
Oct 14 05:05:20 localhost podman[101903]: 2025-10-14 09:05:20.561054363 +0000 UTC m=+0.098826923 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9) Oct 14 05:05:20 localhost podman[101902]: 2025-10-14 09:05:20.530240236 +0000 UTC m=+0.072458165 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:05:20 localhost podman[101903]: 2025-10-14 09:05:20.600182993 +0000 UTC m=+0.137955513 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git) Oct 14 05:05:20 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 05:05:20 localhost podman[101902]: 2025-10-14 09:05:20.614371594 +0000 UTC m=+0.156589473 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:04:03) Oct 14 05:05:20 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:05:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:05:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:05:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:05:29 localhost podman[101943]: 2025-10-14 09:05:29.543357817 +0000 UTC m=+0.087164091 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:05:29 localhost podman[101948]: 2025-10-14 09:05:29.563381044 +0000 UTC m=+0.096871711 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, release=1) Oct 14 05:05:29 localhost podman[101943]: 2025-10-14 09:05:29.583114584 +0000 UTC m=+0.126920828 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:05:29 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 05:05:29 localhost podman[101944]: 2025-10-14 09:05:29.626246561 +0000 UTC m=+0.161249018 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Oct 14 05:05:29 localhost podman[101948]: 2025-10-14 09:05:29.646715931 +0000 UTC m=+0.180206628 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, vcs-type=git, batch=17.1_20250721.1) Oct 14 05:05:29 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:05:29 localhost podman[101944]: 2025-10-14 09:05:29.667768546 +0000 UTC m=+0.202771003 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12)
Oct 14 05:05:29 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully.
Oct 14 05:05:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.
Oct 14 05:05:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.
Oct 14 05:05:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.
Oct 14 05:05:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.
Oct 14 05:05:31 localhost systemd[1]: tmp-crun.SiRAHC.mount: Deactivated successfully.
Oct 14 05:05:31 localhost podman[102015]: 2025-10-14 09:05:31.561185476 +0000 UTC m=+0.099570152 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44) Oct 14 05:05:31 localhost podman[102015]: 2025-10-14 09:05:31.605185978 
+0000 UTC m=+0.143570594 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T13:28:44) Oct 14 05:05:31 localhost podman[102015]: unhealthy Oct 14 05:05:31 localhost systemd[1]: tmp-crun.hoJ6FS.mount: Deactivated successfully. 
Oct 14 05:05:31 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:05:31 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'.
Oct 14 05:05:31 localhost podman[102016]: 2025-10-14 09:05:31.616995984 +0000 UTC m=+0.152051161 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, container_name=nova_migration_target, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro',
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 14 05:05:31 localhost podman[102018]: 2025-10-14 09:05:31.662812474 +0000 UTC m=+0.191566832 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, 
managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 05:05:31 localhost podman[102017]: 2025-10-14 09:05:31.714172983 +0000 UTC m=+0.244155984 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-type=git, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 14 05:05:31 localhost podman[102018]: 2025-10-14 09:05:31.741667321 +0000 UTC m=+0.270421629 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:05:31 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 05:05:31 localhost podman[102017]: 2025-10-14 09:05:31.758098021 +0000 UTC m=+0.288081052 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, vcs-type=git, version=17.1.9, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Oct 14 05:05:31 localhost podman[102017]: unhealthy
Oct 14 05:05:31 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:05:31 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'.
Oct 14 05:05:32 localhost podman[102016]: 2025-10-14 09:05:32.004240418 +0000 UTC m=+0.539295555 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible) Oct 14 05:05:32 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:05:36 localhost podman[102097]: 2025-10-14 09:05:36.534830373 +0000 UTC m=+0.074043599 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step1, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 05:05:36 localhost podman[102097]: 2025-10-14 09:05:36.706453279 +0000 UTC m=+0.245666535 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1) Oct 14 05:05:36 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:05:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:05:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:05:51 localhost podman[102127]: 2025-10-14 09:05:51.549055156 +0000 UTC m=+0.086550944 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, vcs-type=git, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:05:51 localhost podman[102127]: 2025-10-14 09:05:51.561093739 +0000 UTC m=+0.098589517 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, 
config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=) Oct 14 05:05:51 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:05:51 localhost podman[102126]: 2025-10-14 09:05:51.653610812 +0000 UTC m=+0.191257114 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 14 05:05:51 localhost podman[102126]: 2025-10-14 09:05:51.692189018 +0000 UTC m=+0.229835380 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step3, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public) Oct 14 05:05:51 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:06:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:06:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:06:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:06:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:06:00 localhost recover_tripleo_nova_virtqemud[102261]: 62532 Oct 14 05:06:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:06:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 05:06:00 localhost systemd[1]: tmp-crun.tOqQaa.mount: Deactivated successfully. Oct 14 05:06:00 localhost podman[102243]: 2025-10-14 09:06:00.559311298 +0000 UTC m=+0.091352812 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, tcib_managed=true, build-date=2025-07-21T15:29:47, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., 
com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi) Oct 14 05:06:00 localhost podman[102241]: 2025-10-14 09:06:00.611937141 +0000 UTC m=+0.148024524 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp 
openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33) Oct 14 05:06:00 localhost podman[102243]: 2025-10-14 09:06:00.623386188 +0000 UTC m=+0.155427692 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:06:00 localhost podman[102241]: 2025-10-14 09:06:00.642111671 +0000 UTC m=+0.178199064 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:06:00 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:06:00 localhost podman[102242]: 2025-10-14 09:06:00.666905917 +0000 UTC m=+0.199898926 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=logrotate_crond, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1) Oct 14 05:06:00 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 05:06:00 localhost podman[102242]: 2025-10-14 09:06:00.705006229 +0000 UTC m=+0.237999238 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, build-date=2025-07-21T13:07:52, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat 
OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron) Oct 14 05:06:00 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:06:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:06:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:06:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:06:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:06:02 localhost systemd[1]: tmp-crun.AIvLMs.mount: Deactivated successfully. 
Oct 14 05:06:02 localhost podman[102315]: 2025-10-14 09:06:02.523246723 +0000 UTC m=+0.069737693 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, container_name=ovn_controller) Oct 14 05:06:02 localhost podman[102323]: 2025-10-14 09:06:02.587820676 
+0000 UTC m=+0.125249853 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37) Oct 14 05:06:02 localhost podman[102315]: 2025-10-14 09:06:02.607056102 +0000 UTC m=+0.153547082 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, tcib_managed=true, version=17.1.9, container_name=ovn_controller, release=1) Oct 14 05:06:02 localhost podman[102315]: unhealthy Oct 14 05:06:02 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:06:02 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:06:02 localhost podman[102323]: 2025-10-14 09:06:02.644958529 +0000 UTC m=+0.182387626 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, architecture=x86_64, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-nova-compute) Oct 14 05:06:02 localhost podman[102316]: 2025-10-14 09:06:02.653598551 +0000 UTC m=+0.193024492 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, build-date=2025-07-21T14:48:37, architecture=x86_64, io.openshift.expose-services=, container_name=nova_migration_target, config_id=tripleo_step4) Oct 14 05:06:02 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 05:06:02 localhost podman[102317]: 2025-10-14 09:06:02.561796547 +0000 UTC m=+0.099094360 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:06:02 localhost podman[102317]: 2025-10-14 09:06:02.691835978 +0000 UTC m=+0.229133851 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true) Oct 14 05:06:02 localhost podman[102317]: unhealthy Oct 14 05:06:02 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:06:02 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:06:03 localhost podman[102316]: 2025-10-14 09:06:03.042097059 +0000 UTC m=+0.581523010 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, release=1, 
tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:06:03 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:06:07 localhost systemd[1]: tmp-crun.vqPpov.mount: Deactivated successfully. Oct 14 05:06:07 localhost podman[102402]: 2025-10-14 09:06:07.554823935 +0000 UTC m=+0.098023692 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Oct 14 05:06:07 localhost podman[102402]: 2025-10-14 09:06:07.744791733 +0000 UTC m=+0.287991430 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, release=1, vcs-type=git, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 14 05:06:07 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:06:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:06:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:06:22 localhost systemd[1]: tmp-crun.J427q0.mount: Deactivated successfully. 
Oct 14 05:06:22 localhost podman[102432]: 2025-10-14 09:06:22.553826429 +0000 UTC m=+0.095410452 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1) Oct 14 05:06:22 localhost podman[102432]: 2025-10-14 09:06:22.564973409 +0000 UTC m=+0.106557392 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, container_name=iscsid, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 05:06:22 localhost podman[102431]: 2025-10-14 09:06:22.601619432 +0000 UTC m=+0.143925545 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=2, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:04:03, container_name=collectd, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Oct 14 05:06:22 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 05:06:22 localhost podman[102431]: 2025-10-14 09:06:22.639250382 +0000 UTC m=+0.181556415 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-07-21T13:04:03, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, container_name=collectd, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=2) Oct 14 05:06:22 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:06:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:06:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:06:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:06:31 localhost podman[102470]: 2025-10-14 09:06:31.545084329 +0000 UTC m=+0.087495309 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 14 05:06:31 localhost podman[102470]: 2025-10-14 09:06:31.57606217 +0000 UTC m=+0.118473170 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public) Oct 14 05:06:31 localhost podman[102471]: 2025-10-14 09:06:31.593071497 +0000 UTC m=+0.131773128 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:06:31 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 05:06:31 localhost podman[102471]: 2025-10-14 09:06:31.604007531 +0000 UTC m=+0.142709172 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack 
Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, architecture=x86_64, release=1) Oct 14 05:06:31 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:06:31 localhost podman[102472]: 2025-10-14 09:06:31.65949944 +0000 UTC m=+0.195132308 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1, 
build-date=2025-07-21T15:29:47, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12) Oct 14 05:06:31 localhost podman[102472]: 2025-10-14 09:06:31.710958082 +0000 UTC m=+0.246590950 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9) Oct 14 05:06:31 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:06:32 localhost systemd[1]: tmp-crun.KlGyBS.mount: Deactivated successfully. Oct 14 05:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:06:33 localhost podman[102541]: 2025-10-14 09:06:33.554496754 +0000 UTC m=+0.089805991 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, version=17.1.9, container_name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12) Oct 14 05:06:33 localhost podman[102543]: 2025-10-14 09:06:33.587976762 +0000 UTC m=+0.111628287 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9) Oct 14 05:06:33 localhost podman[102542]: 2025-10-14 09:06:33.62477209 +0000 UTC m=+0.154975411 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., container_name=ovn_metadata_agent) Oct 14 05:06:33 localhost podman[102540]: 2025-10-14 09:06:33.663662214 +0000 UTC m=+0.200558724 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_controller) Oct 14 05:06:33 localhost podman[102542]: 2025-10-14 09:06:33.687826422 +0000 UTC m=+0.218029723 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64) Oct 14 05:06:33 localhost podman[102542]: unhealthy Oct 14 05:06:33 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:06:33 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:06:33 localhost podman[102540]: 2025-10-14 09:06:33.708860237 +0000 UTC m=+0.245756737 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.9, distribution-scope=public) Oct 14 05:06:33 localhost podman[102540]: unhealthy Oct 14 05:06:33 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:06:33 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:06:33 localhost podman[102543]: 2025-10-14 09:06:33.76861247 +0000 UTC m=+0.292264015 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, io.openshift.expose-services=, container_name=nova_compute, release=1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 14 05:06:33 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. 
Oct 14 05:06:33 localhost podman[102541]: 2025-10-14 09:06:33.921197736 +0000 UTC m=+0.456506933 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 05:06:33 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:06:34 localhost systemd[1]: tmp-crun.j5stwf.mount: Deactivated successfully. Oct 14 05:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:06:38 localhost podman[102625]: 2025-10-14 09:06:38.547669793 +0000 UTC m=+0.082345661 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, distribution-scope=public, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc.) 
Oct 14 05:06:38 localhost podman[102625]: 2025-10-14 09:06:38.771630614 +0000 UTC m=+0.306306502 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, version=17.1.9, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, 
vcs-type=git, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1) Oct 14 05:06:38 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:06:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:06:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:06:53 localhost podman[102657]: 2025-10-14 09:06:53.56018089 +0000 UTC m=+0.096656874 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step3, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:06:53 localhost podman[102657]: 2025-10-14 09:06:53.56948381 +0000 UTC m=+0.105959804 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Oct 14 05:06:53 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:06:53 localhost systemd[1]: tmp-crun.d8anp4.mount: Deactivated successfully. Oct 14 05:06:53 localhost podman[102658]: 2025-10-14 09:06:53.656075234 +0000 UTC m=+0.189416924 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 05:06:53 localhost podman[102658]: 2025-10-14 09:06:53.695254106 +0000 UTC m=+0.228595766 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_id=tripleo_step3, release=1, architecture=x86_64, build-date=2025-07-21T13:27:15, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:06:53 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:06:56 localhost podman[102802]: 2025-10-14 09:06:56.616389811 +0000 UTC m=+0.079006051 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, architecture=x86_64, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, 
io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 05:06:56 localhost podman[102802]: 2025-10-14 09:06:56.737562774 +0000 UTC m=+0.200179014 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, release=553, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, vcs-type=git) Oct 14 05:07:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:07:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. 
Oct 14 05:07:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:07:02 localhost podman[102945]: 2025-10-14 09:07:02.559092458 +0000 UTC m=+0.092795772 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true) Oct 14 05:07:02 localhost systemd[1]: tmp-crun.YzCakr.mount: Deactivated successfully. Oct 14 05:07:02 localhost podman[102944]: 2025-10-14 09:07:02.64408465 +0000 UTC m=+0.181030881 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 14 05:07:02 localhost podman[102945]: 2025-10-14 09:07:02.658490696 +0000 UTC m=+0.192193970 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 14 05:07:02 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 05:07:02 localhost podman[102944]: 2025-10-14 09:07:02.709862255 +0000 UTC m=+0.246808476 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step4, release=1, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64) Oct 14 05:07:02 localhost podman[102943]: 2025-10-14 09:07:02.659540674 +0000 UTC m=+0.197401349 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:07:02 localhost podman[102943]: 2025-10-14 09:07:02.744408832 +0000 UTC m=+0.282269517 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1, architecture=x86_64, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 05:07:02 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:07:02 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 05:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:07:04 localhost podman[103015]: 2025-10-14 09:07:04.528895928 +0000 UTC m=+0.069718151 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:07:04 localhost systemd[1]: tmp-crun.8Fadx9.mount: Deactivated successfully. Oct 14 05:07:04 localhost podman[103014]: 2025-10-14 09:07:04.54643919 +0000 UTC m=+0.087291384 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, version=17.1.9, container_name=ovn_controller, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44) Oct 14 05:07:04 localhost podman[103014]: 2025-10-14 09:07:04.58629626 +0000 UTC m=+0.127148434 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1) Oct 14 05:07:04 localhost podman[103014]: unhealthy Oct 14 05:07:04 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:07:04 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:07:04 localhost podman[103016]: 2025-10-14 09:07:04.59935646 +0000 UTC m=+0.135668913 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, tcib_managed=true, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public) Oct 14 05:07:04 localhost podman[103016]: 2025-10-14 09:07:04.617113257 +0000 UTC m=+0.153425690 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, release=1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:07:04 localhost podman[103016]: unhealthy Oct 14 05:07:04 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:07:04 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:07:04 localhost podman[103017]: 2025-10-14 09:07:04.667764826 +0000 UTC m=+0.200322977 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:07:04 localhost podman[103017]: 2025-10-14 09:07:04.697168646 +0000 UTC m=+0.229726767 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, container_name=nova_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, config_id=tripleo_step5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1) Oct 14 05:07:04 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:07:04 localhost podman[103015]: 2025-10-14 09:07:04.882270844 +0000 UTC m=+0.423093067 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:07:04 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 05:07:09 localhost podman[103101]: 2025-10-14 09:07:09.545541029 +0000 UTC m=+0.087749355 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1) Oct 14 05:07:09 localhost podman[103101]: 2025-10-14 09:07:09.751078296 +0000 UTC m=+0.293286602 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1, version=17.1.9, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1) Oct 14 05:07:09 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:07:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:07:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:07:24 localhost podman[103133]: 2025-10-14 09:07:24.561645581 +0000 UTC m=+0.093463119 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1) Oct 14 05:07:24 localhost podman[103132]: 2025-10-14 09:07:24.609251869 +0000 UTC m=+0.141930149 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=2, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1) Oct 14 05:07:24 localhost podman[103132]: 2025-10-14 09:07:24.624108269 +0000 UTC m=+0.156786579 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, release=2, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible) Oct 14 05:07:24 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:07:24 localhost podman[103133]: 2025-10-14 09:07:24.680527993 +0000 UTC m=+0.212345511 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, batch=17.1_20250721.1, release=1) Oct 14 05:07:24 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:07:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:07:32 localhost recover_tripleo_nova_virtqemud[103174]: 62532 Oct 14 05:07:32 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:07:32 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 05:07:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:07:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:07:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:07:33 localhost podman[103175]: 2025-10-14 09:07:33.55275798 +0000 UTC m=+0.091344663 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, architecture=x86_64, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:07:33 localhost systemd[1]: tmp-crun.mAp7FW.mount: Deactivated successfully. Oct 14 05:07:33 localhost podman[103177]: 2025-10-14 09:07:33.609212285 +0000 UTC m=+0.141972132 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.openshift.expose-services=) Oct 14 05:07:33 localhost podman[103176]: 2025-10-14 09:07:33.660640546 +0000 UTC m=+0.198330785 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, config_id=tripleo_step4, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc.) 
Oct 14 05:07:33 localhost podman[103176]: 2025-10-14 09:07:33.673269584 +0000 UTC m=+0.210959813 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, tcib_managed=true) Oct 14 05:07:33 localhost podman[103175]: 2025-10-14 09:07:33.684022403 +0000 UTC m=+0.222609106 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible) Oct 14 05:07:33 localhost podman[103177]: 2025-10-14 09:07:33.686341746 +0000 UTC m=+0.219101583 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 05:07:33 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:07:33 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:07:33 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:07:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:07:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:07:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 05:07:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:07:35 localhost systemd[1]: tmp-crun.w4xary.mount: Deactivated successfully. Oct 14 05:07:35 localhost podman[103250]: 2025-10-14 09:07:35.558294971 +0000 UTC m=+0.100374756 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 
17.1 ovn-controller, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:07:35 localhost podman[103254]: 2025-10-14 09:07:35.585153612 +0000 UTC m=+0.114473004 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, config_id=tripleo_step5, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, tcib_managed=true) Oct 14 05:07:35 localhost podman[103250]: 2025-10-14 09:07:35.643120887 +0000 UTC m=+0.185200652 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, release=1, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 
ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc.) Oct 14 05:07:35 localhost podman[103250]: unhealthy Oct 14 05:07:35 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:07:35 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:07:35 localhost podman[103252]: 2025-10-14 09:07:35.655843559 +0000 UTC m=+0.188695356 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:07:35 localhost podman[103251]: 2025-10-14 09:07:35.614096868 +0000 UTC m=+0.150913461 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, 
name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:07:35 localhost podman[103254]: 2025-10-14 09:07:35.669471845 +0000 UTC m=+0.198791177 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 
nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 14 05:07:35 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:07:35 localhost podman[103252]: 2025-10-14 09:07:35.724034509 +0000 UTC m=+0.256886336 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_metadata_agent, release=1, tcib_managed=true, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9) Oct 14 05:07:35 localhost podman[103252]: unhealthy Oct 14 05:07:35 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:07:35 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:07:35 localhost podman[103251]: 2025-10-14 09:07:35.987302445 +0000 UTC m=+0.524119018 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, release=1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, maintainer=OpenStack TripleO Team) Oct 14 05:07:35 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:07:40 localhost podman[103332]: 2025-10-14 09:07:40.552676298 +0000 UTC m=+0.096554386 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 05:07:40 localhost podman[103332]: 2025-10-14 09:07:40.773204712 +0000 UTC m=+0.317082780 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:07:59, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1) Oct 14 05:07:40 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:07:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:07:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:07:55 localhost systemd[1]: tmp-crun.JEPoKQ.mount: Deactivated successfully. 
Oct 14 05:07:55 localhost podman[103361]: 2025-10-14 09:07:55.558853239 +0000 UTC m=+0.094488969 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public) Oct 14 05:07:55 localhost podman[103361]: 2025-10-14 09:07:55.568163519 +0000 UTC m=+0.103799299 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, release=2, version=17.1.9, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:07:55 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:07:55 localhost podman[103362]: 2025-10-14 09:07:55.6571275 +0000 UTC m=+0.188754283 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.component=openstack-iscsid-container, 
container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Oct 14 05:07:55 localhost podman[103362]: 2025-10-14 09:07:55.695155801 +0000 UTC m=+0.226782544 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, distribution-scope=public, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, release=1, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 14 05:07:55 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:08:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:08:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:08:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:08:04 localhost podman[103475]: 2025-10-14 09:08:04.566839612 +0000 UTC m=+0.097725077 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, config_id=tripleo_step4, name=rhosp17/openstack-cron, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, 
build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:08:04 localhost podman[103475]: 2025-10-14 09:08:04.606203539 +0000 UTC m=+0.137088964 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, release=1, name=rhosp17/openstack-cron, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git) Oct 14 05:08:04 localhost systemd[1]: tmp-crun.MtFw1o.mount: Deactivated successfully. Oct 14 05:08:04 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:08:04 localhost podman[103474]: 2025-10-14 09:08:04.624072399 +0000 UTC m=+0.160245866 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public) Oct 14 05:08:04 localhost podman[103474]: 2025-10-14 09:08:04.654027394 +0000 UTC m=+0.190200811 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, vendor=Red 
Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible) Oct 14 05:08:04 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 05:08:04 localhost podman[103476]: 2025-10-14 09:08:04.669212742 +0000 UTC m=+0.199103311 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) Oct 14 05:08:04 localhost podman[103476]: 2025-10-14 09:08:04.695068857 +0000 UTC m=+0.224959426 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., 
com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team) Oct 14 05:08:04 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:08:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:08:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:08:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:08:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:08:06 localhost systemd[1]: tmp-crun.0fcQnp.mount: Deactivated successfully. 
Oct 14 05:08:06 localhost podman[103547]: 2025-10-14 09:08:06.553822842 +0000 UTC m=+0.095718492 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 14 05:08:06 localhost podman[103548]: 2025-10-14 09:08:06.621382687 
+0000 UTC m=+0.158693174 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:08:06 localhost podman[103555]: 2025-10-14 09:08:06.589780668 +0000 UTC m=+0.120112918 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step5, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:08:06 localhost podman[103555]: 2025-10-14 09:08:06.670702122 +0000 UTC m=+0.201034372 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, distribution-scope=public, config_id=tripleo_step5, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, 
name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1) Oct 14 05:08:06 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:08:06 localhost podman[103547]: 2025-10-14 09:08:06.683942038 +0000 UTC m=+0.225837678 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, tcib_managed=true, version=17.1.9, distribution-scope=public) Oct 14 05:08:06 localhost podman[103547]: unhealthy Oct 14 05:08:06 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:08:06 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:08:06 localhost podman[103549]: 2025-10-14 09:08:06.726561983 +0000 UTC m=+0.258986218 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1) Oct 14 05:08:06 localhost podman[103549]: 2025-10-14 09:08:06.77111901 +0000 UTC m=+0.303543175 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:08:06 localhost podman[103549]: unhealthy Oct 14 05:08:06 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:08:06 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:08:06 localhost podman[103548]: 2025-10-14 09:08:06.968181914 +0000 UTC m=+0.505492411 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, architecture=x86_64, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 14 05:08:06 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 05:08:11 localhost podman[103630]: 2025-10-14 09:08:11.510022034 +0000 UTC m=+0.058014079 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 05:08:11 localhost podman[103630]: 2025-10-14 09:08:11.728479443 +0000 UTC m=+0.276471568 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 14 05:08:11 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:08:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:08:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:08:26 localhost podman[103661]: 2025-10-14 09:08:26.554482669 +0000 UTC m=+0.092161067 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Oct 14 05:08:26 localhost podman[103661]: 2025-10-14 09:08:26.568104905 +0000 UTC m=+0.105783313 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, distribution-scope=public, managed_by=tripleo_ansible, container_name=iscsid, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc.) Oct 14 05:08:26 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:08:26 localhost podman[103660]: 2025-10-14 09:08:26.657847326 +0000 UTC m=+0.197348694 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=2, config_id=tripleo_step3, vcs-type=git, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, 
container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc.) 
Oct 14 05:08:26 localhost podman[103660]: 2025-10-14 09:08:26.670101965 +0000 UTC m=+0.209603363 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vcs-type=git) Oct 14 05:08:26 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:08:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:08:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:08:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:08:35 localhost podman[103701]: 2025-10-14 09:08:35.554264142 +0000 UTC m=+0.087137572 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, 
tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-cron, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=logrotate_crond) Oct 14 05:08:35 localhost podman[103701]: 2025-10-14 09:08:35.590088445 +0000 UTC m=+0.122961885 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
managed_by=tripleo_ansible, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, release=1, tcib_managed=true, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 05:08:35 localhost podman[103700]: 2025-10-14 09:08:35.604902934 +0000 UTC m=+0.140283211 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true) Oct 14 05:08:35 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 05:08:35 localhost podman[103700]: 2025-10-14 09:08:35.638136776 +0000 UTC m=+0.173517043 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, container_name=ceilometer_agent_compute, 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git) Oct 14 05:08:35 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:08:35 localhost podman[103702]: 2025-10-14 09:08:35.654204088 +0000 UTC m=+0.183526741 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Oct 14 05:08:35 localhost podman[103702]: 2025-10-14 09:08:35.706817921 +0000 UTC m=+0.236140554 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, version=17.1.9, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 14 05:08:35 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:08:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:08:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:08:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:08:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:08:37 localhost podman[103776]: 2025-10-14 09:08:37.56080389 +0000 UTC m=+0.092814015 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:08:37 localhost podman[103776]: 2025-10-14 09:08:37.597640299 +0000 UTC m=+0.129650354 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64) Oct 14 05:08:37 localhost podman[103776]: unhealthy Oct 14 05:08:37 localhost podman[103777]: 2025-10-14 09:08:37.6148163 +0000 UTC m=+0.144016619 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, release=1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, container_name=nova_compute, maintainer=OpenStack TripleO Team) Oct 14 05:08:37 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:08:37 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:08:37 localhost systemd[1]: tmp-crun.WYGa75.mount: Deactivated successfully. Oct 14 05:08:37 localhost podman[103775]: 2025-10-14 09:08:37.668945314 +0000 UTC m=+0.202253784 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target) Oct 14 05:08:37 localhost podman[103777]: 2025-10-14 09:08:37.675287626 +0000 UTC m=+0.204487895 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, build-date=2025-07-21T14:48:37, 
io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, release=1) Oct 14 05:08:37 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:08:37 localhost podman[103774]: 2025-10-14 09:08:37.719267336 +0000 UTC m=+0.255493334 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, container_name=ovn_controller, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team) Oct 14 05:08:37 localhost podman[103774]: 2025-10-14 09:08:37.734420354 +0000 UTC m=+0.270646342 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, distribution-scope=public, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container) Oct 14 05:08:37 localhost podman[103774]: unhealthy Oct 14 05:08:37 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:08:37 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:08:38 localhost podman[103775]: 2025-10-14 09:08:38.043293242 +0000 UTC m=+0.576601662 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible) Oct 14 05:08:38 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:08:42 localhost systemd[1]: tmp-crun.ezgFIk.mount: Deactivated successfully. 
Oct 14 05:08:42 localhost podman[103860]: 2025-10-14 09:08:42.557218351 +0000 UTC m=+0.097271934 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 14 05:08:42 localhost podman[103860]: 2025-10-14 09:08:42.775755262 +0000 UTC m=+0.315808825 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, name=rhosp17/openstack-qdrouterd, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step1) Oct 14 05:08:42 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:08:52 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:08:52 localhost recover_tripleo_nova_virtqemud[103892]: 62532 Oct 14 05:08:52 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:08:52 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 05:08:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:08:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:08:57 localhost podman[103894]: 2025-10-14 09:08:57.558008804 +0000 UTC m=+0.090808060 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team) Oct 14 05:08:57 localhost podman[103893]: 2025-10-14 09:08:57.606675261 +0000 UTC m=+0.143824154 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=2, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public) Oct 14 05:08:57 localhost podman[103893]: 2025-10-14 09:08:57.615997703 +0000 UTC m=+0.153146636 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, release=2, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git) Oct 14 05:08:57 localhost podman[103894]: 2025-10-14 09:08:57.624523562 +0000 UTC m=+0.157322818 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid) Oct 14 05:08:57 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:08:57 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:09:06 localhost podman[104009]: 2025-10-14 09:09:06.557449258 +0000 UTC m=+0.091076129 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, release=1, build-date=2025-07-21T14:45:33, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 14 05:09:06 localhost podman[104011]: 2025-10-14 09:09:06.617917952 +0000 UTC m=+0.144633326 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, 
description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 14 05:09:06 localhost podman[104010]: 2025-10-14 09:09:06.662851389 +0000 UTC m=+0.190555691 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, 
build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=) Oct 14 05:09:06 localhost podman[104009]: 2025-10-14 09:09:06.669325583 +0000 UTC m=+0.202952404 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 14 05:09:06 localhost podman[104010]: 2025-10-14 09:09:06.676537207 +0000 UTC m=+0.204241479 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, version=17.1.9, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:09:06 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:09:06 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:09:06 localhost podman[104011]: 2025-10-14 09:09:06.726634333 +0000 UTC m=+0.253349697 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, 
managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 05:09:06 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:09:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:09:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:09:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:09:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:09:08 localhost podman[104088]: 2025-10-14 09:09:08.562697989 +0000 UTC m=+0.085394235 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step5, io.openshift.expose-services=, container_name=nova_compute, build-date=2025-07-21T14:48:37) Oct 14 05:09:08 localhost podman[104088]: 2025-10-14 09:09:08.596152738 +0000 UTC m=+0.118849014 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, container_name=nova_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, release=1, vcs-type=git) Oct 14 05:09:08 localhost 
systemd[1]: tmp-crun.PgnRot.mount: Deactivated successfully. Oct 14 05:09:08 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:09:08 localhost podman[104081]: 2025-10-14 09:09:08.611410098 +0000 UTC m=+0.144236326 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:09:08 localhost podman[104082]: 2025-10-14 09:09:08.674181114 +0000 UTC m=+0.201793232 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12) Oct 14 05:09:08 localhost podman[104083]: 2025-10-14 09:09:08.724825905 +0000 UTC m=+0.247751497 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, 
io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 14 05:09:08 localhost podman[104081]: 2025-10-14 09:09:08.750034522 +0000 UTC m=+0.282860750 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, batch=17.1_20250721.1, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:09:08 localhost podman[104081]: unhealthy Oct 14 05:09:08 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:09:08 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:09:08 localhost podman[104083]: 2025-10-14 09:09:08.768351675 +0000 UTC m=+0.291277297 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=) Oct 14 05:09:08 localhost podman[104083]: unhealthy Oct 14 05:09:08 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:09:08 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:09:09 localhost podman[104082]: 2025-10-14 09:09:09.008572928 +0000 UTC m=+0.536185096 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, 
maintainer=OpenStack TripleO Team, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:09:09 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:09:13 localhost podman[104169]: 2025-10-14 09:09:13.544905589 +0000 UTC m=+0.085336894 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 14 05:09:13 localhost podman[104169]: 2025-10-14 09:09:13.768339612 +0000 UTC m=+0.308770947 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:07:59, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.buildah.version=1.33.12, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., vcs-type=git) Oct 14 05:09:13 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:09:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:09:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:09:28 localhost podman[104197]: 2025-10-14 09:09:28.5616413 +0000 UTC m=+0.073744792 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, build-date=2025-07-21T13:27:15, container_name=iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:09:28 localhost podman[104196]: 2025-10-14 09:09:28.624387386 +0000 UTC m=+0.137934918 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, container_name=collectd, name=rhosp17/openstack-collectd, release=2, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, distribution-scope=public) Oct 14 05:09:28 localhost podman[104197]: 2025-10-14 09:09:28.645126553 +0000 UTC m=+0.157230045 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, config_id=tripleo_step3, version=17.1.9, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container) Oct 14 05:09:28 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. 
Oct 14 05:09:28 localhost podman[104196]: 2025-10-14 09:09:28.663392084 +0000 UTC m=+0.176939666 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
config_id=tripleo_step3, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 14 05:09:28 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:09:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:09:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:09:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:09:37 localhost systemd[1]: tmp-crun.dJF5No.mount: Deactivated successfully. 
Oct 14 05:09:37 localhost podman[104239]: 2025-10-14 09:09:37.555485396 +0000 UTC m=+0.092789593 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, version=17.1.9, container_name=logrotate_crond, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Oct 14 05:09:37 localhost podman[104239]: 2025-10-14 09:09:37.567238202 +0000 UTC m=+0.104542459 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, distribution-scope=public, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, 
Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:09:37 localhost podman[104238]: 2025-10-14 09:09:37.527830903 +0000 UTC m=+0.073150886 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, distribution-scope=public, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git) Oct 14 05:09:37 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 05:09:37 localhost podman[104245]: 2025-10-14 09:09:37.645302069 +0000 UTC m=+0.181548828 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T15:29:47, 
name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 05:09:37 localhost podman[104238]: 2025-10-14 09:09:37.662957153 +0000 UTC m=+0.208277166 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 14 05:09:37 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:09:37 localhost podman[104245]: 2025-10-14 09:09:37.708209429 +0000 UTC m=+0.244456228 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=) Oct 14 05:09:37 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:09:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:09:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:09:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:09:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:09:39 localhost podman[104307]: 2025-10-14 09:09:39.551350537 +0000 UTC m=+0.089832785 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:09:39 localhost podman[104307]: 2025-10-14 09:09:39.591221168 
+0000 UTC m=+0.129703426 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, release=1, distribution-scope=public, batch=17.1_20250721.1) Oct 14 05:09:39 localhost podman[104307]: unhealthy Oct 14 05:09:39 localhost podman[104308]: 2025-10-14 09:09:39.601119484 +0000 UTC m=+0.136207481 container health_status 
5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 
nova-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, architecture=x86_64, tcib_managed=true) Oct 14 05:09:39 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:09:39 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:09:39 localhost podman[104315]: 2025-10-14 09:09:39.664028915 +0000 UTC m=+0.189492793 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T14:48:37, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible) Oct 14 05:09:39 localhost podman[104315]: 2025-10-14 09:09:39.69291611 +0000 UTC m=+0.218380068 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:09:39 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Deactivated successfully. Oct 14 05:09:39 localhost podman[104309]: 2025-10-14 09:09:39.777105462 +0000 UTC m=+0.304804100 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 05:09:39 localhost podman[104309]: 2025-10-14 09:09:39.795049724 +0000 UTC m=+0.322748352 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, vcs-type=git, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:09:39 localhost podman[104309]: unhealthy Oct 14 05:09:39 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:09:39 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:09:39 localhost podman[104308]: 2025-10-14 09:09:39.990418563 +0000 UTC m=+0.525506630 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1) Oct 14 05:09:40 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:09:40 localhost systemd[1]: tmp-crun.dc4eZe.mount: Deactivated successfully. Oct 14 05:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 05:09:44 localhost podman[104394]: 2025-10-14 09:09:44.549030291 +0000 UTC m=+0.088121668 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:09:44 localhost podman[104394]: 2025-10-14 09:09:44.752285362 +0000 UTC m=+0.291376729 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, version=17.1.9, container_name=metrics_qdr, tcib_managed=true) Oct 14 05:09:44 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:09:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:09:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:09:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:09:59 localhost recover_tripleo_nova_virtqemud[104427]: 62532 Oct 14 05:09:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:09:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 14 05:09:59 localhost podman[104425]: 2025-10-14 09:09:59.550938225 +0000 UTC m=+0.087989305 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 05:09:59 localhost podman[104425]: 2025-10-14 09:09:59.560281737 +0000 UTC m=+0.097332757 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, distribution-scope=public, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step3, container_name=iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:27:15) Oct 14 05:09:59 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:09:59 localhost podman[104424]: 2025-10-14 09:09:59.65461745 +0000 UTC m=+0.191668450 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step3, vcs-type=git) Oct 14 05:09:59 localhost podman[104424]: 2025-10-14 09:09:59.66349818 +0000 UTC m=+0.200549210 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, 
io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, release=2, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3) Oct 14 05:09:59 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:10:08 localhost podman[104541]: 2025-10-14 09:10:08.542717304 +0000 UTC m=+0.075468358 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 14 05:10:08 localhost podman[104541]: 2025-10-14 09:10:08.575071684 +0000 UTC m=+0.107822738 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:10:08 localhost podman[104542]: 2025-10-14 09:10:08.602046288 +0000 UTC m=+0.133392954 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-cron, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52) Oct 14 05:10:08 localhost podman[104542]: 2025-10-14 09:10:08.643274666 +0000 UTC m=+0.174621292 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, container_name=logrotate_crond, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container) Oct 14 05:10:08 localhost systemd[1]: 
1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:10:08 localhost podman[104543]: 2025-10-14 09:10:08.660818577 +0000 UTC m=+0.188283769 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Oct 14 05:10:08 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:10:08 localhost podman[104543]: 2025-10-14 09:10:08.697218795 +0000 UTC m=+0.224683977 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12) Oct 14 05:10:08 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:10:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:10:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:10:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:10:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:10:10 localhost podman[104616]: 2025-10-14 09:10:10.561025338 +0000 UTC m=+0.090213366 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:10:10 localhost podman[104616]: 2025-10-14 09:10:10.603496319 +0000 UTC m=+0.132684297 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1) Oct 14 05:10:10 localhost podman[104616]: unhealthy Oct 14 05:10:10 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:10:10 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:10:10 localhost podman[104614]: 2025-10-14 09:10:10.605826231 +0000 UTC m=+0.141528273 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, tcib_managed=true, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:10:10 localhost podman[104617]: 2025-10-14 09:10:10.664771905 
+0000 UTC m=+0.190375176 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, version=17.1.9, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git) Oct 14 05:10:10 localhost podman[104615]: 2025-10-14 09:10:10.708858829 +0000 UTC m=+0.239483825 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, container_name=nova_migration_target, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-nova-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:48:37) Oct 14 05:10:10 localhost podman[104614]: 2025-10-14 09:10:10.736889882 +0000 UTC m=+0.272591974 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-type=git, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:10:10 localhost podman[104614]: unhealthy Oct 14 05:10:10 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:10:10 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:10:10 localhost podman[104617]: 2025-10-14 09:10:10.761198845 +0000 UTC m=+0.286802156 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, container_name=nova_compute, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1) Oct 14 05:10:10 localhost podman[104617]: unhealthy Oct 14 05:10:10 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:10:10 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. 
Oct 14 05:10:11 localhost podman[104615]: 2025-10-14 09:10:11.125483472 +0000 UTC m=+0.656108458 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, batch=17.1_20250721.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-compute, architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1) Oct 14 05:10:11 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:10:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:10:15 localhost podman[104699]: 2025-10-14 09:10:15.556946545 +0000 UTC m=+0.094578313 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, vcs-type=git, container_name=metrics_qdr, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 14 05:10:15 localhost podman[104699]: 2025-10-14 09:10:15.740110125 +0000 UTC m=+0.277741873 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59) Oct 14 05:10:15 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:10:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:10:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:10:30 localhost podman[104729]: 2025-10-14 09:10:30.549024665 +0000 UTC m=+0.080850843 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-collectd, config_id=tripleo_step3, container_name=collectd, managed_by=tripleo_ansible, release=2, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:10:30 localhost podman[104729]: 2025-10-14 09:10:30.559849227 +0000 UTC m=+0.091675375 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-collectd) Oct 14 05:10:30 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:10:30 localhost systemd[1]: tmp-crun.TS1jWp.mount: Deactivated successfully. 
Oct 14 05:10:30 localhost podman[104730]: 2025-10-14 09:10:30.61696074 +0000 UTC m=+0.143360852 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, release=1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 14 05:10:30 localhost podman[104730]: 2025-10-14 09:10:30.626105026 +0000 UTC m=+0.152505178 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:10:30 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:10:39 localhost podman[104768]: 2025-10-14 09:10:39.551342895 +0000 UTC m=+0.090945844 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, release=1, vcs-type=git, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible) Oct 14 05:10:39 localhost podman[104770]: 2025-10-14 09:10:39.531588355 +0000 UTC m=+0.068806139 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi) Oct 14 05:10:39 localhost podman[104769]: 2025-10-14 09:10:39.593411435 +0000 UTC m=+0.130419254 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, release=1, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., version=17.1.9, container_name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:10:39 localhost podman[104769]: 2025-10-14 09:10:39.600131436 +0000 UTC m=+0.137139275 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, release=1, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 14 05:10:39 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 05:10:39 localhost podman[104770]: 2025-10-14 09:10:39.61103424 +0000 UTC m=+0.148251984 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=) Oct 14 05:10:39 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:10:39 localhost podman[104768]: 2025-10-14 09:10:39.653183532 +0000 UTC m=+0.192786511 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 14 05:10:39 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. Oct 14 05:10:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:10:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:10:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:10:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:10:41 localhost podman[104840]: 2025-10-14 09:10:41.54737924 +0000 UTC m=+0.078993803 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20250721.1, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, architecture=x86_64, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:10:41 localhost podman[104840]: 2025-10-14 09:10:41.562580048 +0000 UTC m=+0.094194581 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:10:41 localhost podman[104840]: unhealthy Oct 14 05:10:41 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:10:41 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:10:41 localhost podman[104839]: 2025-10-14 09:10:41.554203153 +0000 UTC m=+0.086835804 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, tcib_managed=true, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1, vcs-type=git) Oct 14 05:10:41 localhost podman[104838]: 2025-10-14 09:10:41.647524071 +0000 UTC m=+0.185566108 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, distribution-scope=public) Oct 14 05:10:41 localhost podman[104841]: 2025-10-14 09:10:41.607201977 +0000 UTC m=+0.133188599 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, container_name=nova_compute, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:10:41 localhost podman[104841]: 2025-10-14 09:10:41.690031962 +0000 UTC m=+0.216018644 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, release=1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5) Oct 14 05:10:41 localhost podman[104841]: unhealthy Oct 14 05:10:41 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:10:41 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 05:10:41 localhost podman[104838]: 2025-10-14 09:10:41.741893295 +0000 UTC m=+0.279935332 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9) Oct 14 05:10:41 localhost podman[104838]: unhealthy Oct 14 05:10:41 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:10:41 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:10:41 localhost podman[104839]: 2025-10-14 09:10:41.940245814 +0000 UTC m=+0.472878455 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:10:41 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:10:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:10:46 localhost podman[104919]: 2025-10-14 09:10:46.534268825 +0000 UTC m=+0.073911096 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, architecture=x86_64, vcs-type=git, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd) Oct 14 05:10:46 localhost podman[104919]: 2025-10-14 09:10:46.721171847 +0000 UTC m=+0.260814128 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, architecture=x86_64, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.9) Oct 14 05:10:46 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:11:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:11:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:11:01 localhost podman[104949]: 2025-10-14 09:11:01.534252464 +0000 UTC m=+0.075667884 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-type=git, container_name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, distribution-scope=public, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 14 05:11:01 localhost podman[104949]: 2025-10-14 09:11:01.543617416 +0000 UTC m=+0.085032826 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team) Oct 14 05:11:01 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:11:01 localhost podman[104950]: 2025-10-14 09:11:01.592834578 +0000 UTC m=+0.129987383 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 14 05:11:01 localhost podman[104950]: 2025-10-14 09:11:01.632268468 +0000 UTC m=+0.169421283 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
batch=17.1_20250721.1, distribution-scope=public, release=1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 14 05:11:01 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:11:10 localhost systemd[1]: tmp-crun.8Jk1SE.mount: Deactivated successfully. 
Oct 14 05:11:10 localhost podman[105065]: 2025-10-14 09:11:10.556945362 +0000 UTC m=+0.086926266 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.9, architecture=x86_64, description=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container) Oct 14 05:11:10 localhost systemd[1]: tmp-crun.KRgUuo.mount: Deactivated successfully. Oct 14 05:11:10 localhost podman[105067]: 2025-10-14 09:11:10.607961653 +0000 UTC m=+0.134300299 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1) Oct 14 05:11:10 localhost podman[105065]: 2025-10-14 09:11:10.618838245 +0000 UTC m=+0.148819129 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, architecture=x86_64) Oct 14 05:11:10 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 05:11:10 localhost podman[105067]: 2025-10-14 09:11:10.670200295 +0000 UTC m=+0.196538971 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-type=git) Oct 14 05:11:10 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:11:10 localhost podman[105066]: 2025-10-14 09:11:10.750586784 +0000 UTC m=+0.281324378 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, container_name=logrotate_crond, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12) Oct 14 05:11:10 localhost podman[105066]: 2025-10-14 09:11:10.762113984 +0000 UTC m=+0.292851618 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, name=rhosp17/openstack-cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, container_name=logrotate_crond, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:11:10 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:11:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:11:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:11:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:11:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:11:12 localhost systemd[1]: tmp-crun.zQw0pK.mount: Deactivated successfully. 
Oct 14 05:11:12 localhost podman[105142]: 2025-10-14 09:11:12.574008012 +0000 UTC m=+0.106331378 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, release=1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 05:11:12 localhost podman[105140]: 2025-10-14 09:11:12.539147036 +0000 UTC m=+0.081116890 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1, config_id=tripleo_step4, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git) Oct 14 05:11:12 localhost podman[105142]: 2025-10-14 09:11:12.59327311 +0000 UTC m=+0.125596486 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, tcib_managed=true) Oct 14 05:11:12 localhost podman[105142]: unhealthy Oct 14 05:11:12 localhost podman[105148]: 2025-10-14 09:11:12.655924472 +0000 UTC m=+0.186643204 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, version=17.1.9, container_name=nova_compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red 
Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, distribution-scope=public, io.buildah.version=1.33.12, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 05:11:12 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:11:12 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:11:12 localhost podman[105140]: 2025-10-14 09:11:12.671738928 +0000 UTC m=+0.213708762 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.9, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container) Oct 14 05:11:12 localhost podman[105140]: unhealthy Oct 14 05:11:12 localhost podman[105148]: 2025-10-14 09:11:12.680232906 +0000 UTC m=+0.210951618 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, version=17.1.9, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 14 05:11:12 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:11:12 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:11:12 localhost podman[105148]: unhealthy Oct 14 05:11:12 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:11:12 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 05:11:12 localhost podman[105141]: 2025-10-14 09:11:12.750824372 +0000 UTC m=+0.288980984 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team) Oct 14 05:11:13 localhost podman[105141]: 2025-10-14 09:11:13.094107925 +0000 UTC m=+0.632264447 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=) Oct 14 05:11:13 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:11:15 localhost sshd[105221]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:11:15 localhost systemd-logind[760]: New session 35 of user zuul. Oct 14 05:11:15 localhost systemd[1]: Started Session 35 of User zuul. Oct 14 05:11:16 localhost python3.9[105316]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 05:11:16 localhost systemd[1]: tmp-crun.nqus8l.mount: Deactivated successfully. Oct 14 05:11:16 localhost podman[105411]: 2025-10-14 09:11:16.918486669 +0000 UTC m=+0.097108800 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, version=17.1.9, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, container_name=metrics_qdr, distribution-scope=public, config_id=tripleo_step1) Oct 14 05:11:17 localhost python3.9[105410]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:11:17 localhost podman[105411]: 2025-10-14 09:11:17.114208808 +0000 UTC m=+0.292830919 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Oct 14 05:11:17 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 05:11:17 localhost python3.9[105531]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:11:18 localhost python3.9[105625]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:11:19 localhost python3.9[105718]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:11:20 localhost python3.9[105809]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Oct 14 05:11:21 localhost python3.9[105899]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:11:22 localhost python3.9[105991]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Oct 14 05:11:23 localhost python3.9[106081]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:11:24 localhost python3.9[106129]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:11:25 localhost systemd[1]: session-35.scope: Deactivated successfully. Oct 14 05:11:25 localhost systemd[1]: session-35.scope: Consumed 5.096s CPU time. Oct 14 05:11:25 localhost systemd-logind[760]: Session 35 logged out. Waiting for processes to exit. Oct 14 05:11:25 localhost systemd-logind[760]: Removed session 35. Oct 14 05:11:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38533 DF PROTO=TCP SPT=35562 DPT=9100 SEQ=2067403008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE23930000000001030307) Oct 14 05:11:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34846 DF PROTO=TCP SPT=42986 DPT=9882 SEQ=127950400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE24390000000001030307) Oct 14 05:11:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38534 DF PROTO=TCP SPT=35562 DPT=9100 SEQ=2067403008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE27A90000000001030307) Oct 14 05:11:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34847 DF PROTO=TCP SPT=42986 DPT=9882 SEQ=127950400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE28290000000001030307) Oct 14 05:11:32 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:11:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:11:32 localhost podman[106146]: 2025-10-14 09:11:32.555832046 +0000 UTC m=+0.093023060 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-iscsid-container, version=17.1.9, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid) Oct 14 05:11:32 localhost podman[106145]: 2025-10-14 09:11:32.604514453 +0000 UTC m=+0.142539179 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, architecture=x86_64, release=2, build-date=2025-07-21T13:04:03, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20250721.1) Oct 14 05:11:32 localhost podman[106146]: 2025-10-14 09:11:32.605455849 +0000 UTC m=+0.142646943 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64) Oct 14 05:11:32 localhost podman[106145]: 2025-10-14 09:11:32.643346347 +0000 UTC m=+0.181371123 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.description=Red 
Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, tcib_managed=true, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:04:03, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=collectd) Oct 14 05:11:32 localhost systemd[1]: 
c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. Oct 14 05:11:32 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:11:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52860 DF PROTO=TCP SPT=57030 DPT=9105 SEQ=3636033749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE2EDE0000000001030307) Oct 14 05:11:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38535 DF PROTO=TCP SPT=35562 DPT=9100 SEQ=2067403008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE2FAA0000000001030307) Oct 14 05:11:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34848 DF PROTO=TCP SPT=42986 DPT=9882 SEQ=127950400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE302A0000000001030307) Oct 14 05:11:34 localhost sshd[106182]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:11:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52861 DF PROTO=TCP SPT=57030 DPT=9105 SEQ=3636033749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE32E90000000001030307) Oct 14 05:11:34 localhost systemd-logind[760]: New session 36 of user zuul. Oct 14 05:11:34 localhost systemd[1]: Started Session 36 of User zuul. 
Oct 14 05:11:35 localhost python3.9[106277]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:11:35 localhost systemd[1]: Reloading. Oct 14 05:11:35 localhost systemd-sysv-generator[106303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:11:35 localhost systemd-rc-local-generator[106299]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:11:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:11:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52862 DF PROTO=TCP SPT=57030 DPT=9105 SEQ=3636033749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE3AE90000000001030307) Oct 14 05:11:36 localhost python3.9[106402]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:11:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38536 DF PROTO=TCP SPT=35562 DPT=9100 SEQ=2067403008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE3F690000000001030307) Oct 14 05:11:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34849 DF PROTO=TCP SPT=42986 DPT=9882 SEQ=127950400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE3FE90000000001030307) Oct 14 05:11:37 localhost 
network[106419]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 05:11:37 localhost network[106420]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:11:37 localhost network[106421]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:11:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:11:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52863 DF PROTO=TCP SPT=57030 DPT=9105 SEQ=3636033749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE4AAA0000000001030307) Oct 14 05:11:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:11:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:11:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. 
Oct 14 05:11:40 localhost podman[106489]: 2025-10-14 09:11:40.806664538 +0000 UTC m=+0.122009668 container health_status 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 14 05:11:40 localhost systemd[1]: tmp-crun.v8casz.mount: Deactivated successfully. Oct 14 05:11:40 localhost podman[106489]: 2025-10-14 09:11:40.871323765 +0000 UTC m=+0.186668885 container exec_died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:11:40 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Deactivated successfully. 
Oct 14 05:11:40 localhost podman[106497]: 2025-10-14 09:11:40.862214061 +0000 UTC m=+0.151729618 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, release=1) Oct 14 05:11:40 localhost podman[106525]: 2025-10-14 09:11:40.943889185 +0000 UTC m=+0.135677566 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, container_name=logrotate_crond) Oct 14 05:11:40 localhost podman[106525]: 2025-10-14 09:11:40.952010833 +0000 UTC m=+0.143799204 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, container_name=logrotate_crond, name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 05:11:40 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. 
Oct 14 05:11:40 localhost podman[106497]: 2025-10-14 09:11:40.995234935 +0000 UTC m=+0.284750432 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team) Oct 14 05:11:41 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. Oct 14 05:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:11:43 localhost podman[106602]: 2025-10-14 09:11:43.572588526 +0000 UTC m=+0.102561046 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, distribution-scope=public, architecture=x86_64, container_name=ovn_controller) Oct 14 05:11:43 localhost systemd[1]: tmp-crun.pEc4Q1.mount: Deactivated 
successfully. Oct 14 05:11:43 localhost podman[106605]: 2025-10-14 09:11:43.630743718 +0000 UTC m=+0.147376750 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, container_name=nova_compute, build-date=2025-07-21T14:48:37) Oct 14 05:11:43 localhost podman[106603]: 2025-10-14 09:11:43.637667234 +0000 UTC m=+0.164875340 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, container_name=nova_migration_target, 
architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 14 05:11:43 localhost podman[106605]: 2025-10-14 09:11:43.672082519 +0000 UTC m=+0.188715531 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, 
name=rhosp17/openstack-nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 05:11:43 localhost podman[106605]: unhealthy Oct 14 05:11:43 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:11:43 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 05:11:43 localhost podman[106602]: 2025-10-14 09:11:43.711513558 +0000 UTC m=+0.241486028 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:28:44, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1) Oct 14 05:11:43 localhost podman[106602]: unhealthy Oct 14 05:11:43 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:11:43 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:11:43 localhost podman[106604]: 2025-10-14 09:11:43.740775445 +0000 UTC m=+0.262435502 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, distribution-scope=public, version=17.1.9) Oct 14 05:11:43 localhost podman[106604]: 2025-10-14 09:11:43.787168541 +0000 UTC m=+0.308828548 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, release=1, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:11:43 localhost podman[106604]: unhealthy Oct 14 05:11:43 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:11:43 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:11:44 localhost podman[106603]: 2025-10-14 09:11:44.032153543 +0000 UTC m=+0.559361649 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1) Oct 14 05:11:44 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:11:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8769 DF PROTO=TCP SPT=51996 DPT=9101 SEQ=1217966080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE59720000000001030307) Oct 14 05:11:44 localhost python3.9[106774]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:11:44 localhost network[106791]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 05:11:44 localhost network[106792]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:11:44 localhost network[106793]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:11:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8770 DF PROTO=TCP SPT=51996 DPT=9101 SEQ=1217966080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE5D690000000001030307) Oct 14 05:11:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. 
Oct 14 05:11:47 localhost podman[106864]: 2025-10-14 09:11:47.270314318 +0000 UTC m=+0.095134917 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 
17.1 qdrouterd, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-qdrouterd-container) Oct 14 05:11:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8771 DF PROTO=TCP SPT=51996 DPT=9101 SEQ=1217966080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE65690000000001030307) Oct 14 05:11:47 localhost podman[106864]: 2025-10-14 09:11:47.463479937 +0000 UTC m=+0.288300576 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, container_name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd) Oct 14 05:11:47 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:11:49 localhost python3.9[107021]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:11:49 localhost systemd[1]: Reloading. Oct 14 05:11:50 localhost systemd-rc-local-generator[107051]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:11:50 localhost systemd-sysv-generator[107054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 05:11:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:11:50 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:11:50 localhost systemd[1]: Starting dnf makecache... Oct 14 05:11:50 localhost systemd[1]: Stopping ceilometer_agent_compute container... Oct 14 05:11:50 localhost recover_tripleo_nova_virtqemud[107064]: 62532 Oct 14 05:11:50 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:11:50 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 05:11:50 localhost dnf[107062]: Updating Subscription Management repositories. Oct 14 05:11:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8772 DF PROTO=TCP SPT=51996 DPT=9101 SEQ=1217966080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE75290000000001030307) Oct 14 05:11:52 localhost dnf[107062]: Metadata cache refreshed recently. Oct 14 05:11:52 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Oct 14 05:11:52 localhost systemd[1]: Finished dnf makecache. Oct 14 05:11:52 localhost systemd[1]: dnf-makecache.service: Consumed 2.061s CPU time. 
Oct 14 05:11:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12650 DF PROTO=TCP SPT=50666 DPT=9102 SEQ=3555463087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE7EDD0000000001030307) Oct 14 05:11:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12651 DF PROTO=TCP SPT=50666 DPT=9102 SEQ=3555463087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE82E90000000001030307) Oct 14 05:11:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12652 DF PROTO=TCP SPT=50666 DPT=9102 SEQ=3555463087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE8AE90000000001030307) Oct 14 05:12:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23613 DF PROTO=TCP SPT=55464 DPT=9100 SEQ=685225244 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE98C30000000001030307) Oct 14 05:12:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49328 DF PROTO=TCP SPT=36380 DPT=9882 SEQ=298874688 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE996A0000000001030307) Oct 14 05:12:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12653 DF PROTO=TCP SPT=50666 DPT=9102 SEQ=3555463087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A75FE9AA90000000001030307) Oct 14 05:12:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23614 DF PROTO=TCP SPT=55464 DPT=9100 SEQ=685225244 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE9CE90000000001030307) Oct 14 05:12:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49329 DF PROTO=TCP SPT=36380 DPT=9882 SEQ=298874688 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FE9D690000000001030307) Oct 14 05:12:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:12:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:12:02 localhost systemd[1]: tmp-crun.j2MfXm.mount: Deactivated successfully. 
Oct 14 05:12:02 localhost podman[107081]: 2025-10-14 09:12:02.790931046 +0000 UTC m=+0.072691634 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, container_name=iscsid, description=Red Hat OpenStack 
Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 14 05:12:02 localhost systemd[1]: tmp-crun.o0NmJV.mount: Deactivated successfully. Oct 14 05:12:02 localhost podman[107080]: 2025-10-14 09:12:02.816283557 +0000 UTC m=+0.097643024 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, config_id=tripleo_step3, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 14 05:12:02 localhost podman[107080]: 2025-10-14 09:12:02.828113605 +0000 UTC m=+0.109473122 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., container_name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, release=2) Oct 14 05:12:02 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:12:02 localhost podman[107081]: 2025-10-14 09:12:02.841163496 +0000 UTC m=+0.122924224 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, container_name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1) Oct 14 05:12:02 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:12:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4228 DF PROTO=TCP SPT=37216 DPT=9105 SEQ=413391446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FEA40E0000000001030307) Oct 14 05:12:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23615 DF PROTO=TCP SPT=55464 DPT=9100 SEQ=685225244 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FEA4E90000000001030307) Oct 14 05:12:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23616 DF PROTO=TCP SPT=55464 DPT=9100 SEQ=685225244 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FEB4A90000000001030307) Oct 14 05:12:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4231 DF PROTO=TCP SPT=37216 DPT=9105 SEQ=413391446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FEBFEA0000000001030307) Oct 14 05:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. 
Oct 14 05:12:11 localhost podman[107196]: Error: container 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 is not running Oct 14 05:12:11 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Main process exited, code=exited, status=125/n/a Oct 14 05:12:11 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Failed with result 'exit-code'. Oct 14 05:12:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:12:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:12:11 localhost podman[107208]: 2025-10-14 09:12:11.138044514 +0000 UTC m=+0.079499337 container health_status f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 05:12:11 localhost podman[107207]: 2025-10-14 09:12:11.183337991 +0000 UTC m=+0.125696778 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, batch=17.1_20250721.1, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:12:11 localhost podman[107208]: 2025-10-14 09:12:11.19225915 +0000 UTC m=+0.133713943 container exec_died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-07-21T15:29:47, release=1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 14 05:12:11 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Deactivated successfully. 
Oct 14 05:12:11 localhost podman[107207]: 2025-10-14 09:12:11.218040433 +0000 UTC m=+0.160399200 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, config_id=tripleo_step4, container_name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 05:12:11 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:12:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:12:13 localhost systemd[1]: tmp-crun.rWMQDq.mount: Deactivated successfully. Oct 14 05:12:13 localhost podman[107257]: 2025-10-14 09:12:13.804818508 +0000 UTC m=+0.088965291 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, architecture=x86_64, version=17.1.9, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_compute, io.buildah.version=1.33.12) Oct 14 05:12:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:12:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 05:12:13 localhost podman[107257]: 2025-10-14 09:12:13.826606973 +0000 UTC m=+0.110753756 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, batch=17.1_20250721.1, container_name=nova_compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, release=1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5) Oct 14 05:12:13 localhost podman[107257]: unhealthy Oct 14 05:12:13 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:12:13 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. 
Oct 14 05:12:13 localhost podman[107279]: 2025-10-14 09:12:13.91285631 +0000 UTC m=+0.078990363 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4) Oct 14 05:12:13 localhost podman[107279]: 2025-10-14 09:12:13.95600372 +0000 
UTC m=+0.122137773 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1) Oct 14 05:12:13 localhost podman[107279]: unhealthy Oct 14 05:12:13 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process 
exited, code=exited, status=1/FAILURE Oct 14 05:12:13 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:12:14 localhost systemd[1]: tmp-crun.LIwBXJ.mount: Deactivated successfully. Oct 14 05:12:14 localhost podman[107280]: 2025-10-14 09:12:14.03193834 +0000 UTC m=+0.194151668 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12) Oct 14 05:12:14 localhost podman[107280]: 2025-10-14 09:12:14.050184129 +0000 UTC m=+0.212397457 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:12:14 localhost podman[107280]: unhealthy Oct 14 05:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 05:12:14 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:12:14 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:12:14 localhost podman[107317]: 2025-10-14 09:12:14.162345573 +0000 UTC m=+0.080898825 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=nova_migration_target, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 14 05:12:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37788 DF PROTO=TCP SPT=40406 DPT=9101 SEQ=2499422360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FECEA30000000001030307) Oct 14 05:12:14 localhost podman[107317]: 2025-10-14 09:12:14.539951647 +0000 UTC m=+0.458504939 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 14 05:12:14 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. 
Oct 14 05:12:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37790 DF PROTO=TCP SPT=40406 DPT=9101 SEQ=2499422360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FEDAAA0000000001030307) Oct 14 05:12:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:12:17 localhost podman[107340]: 2025-10-14 09:12:17.78955208 +0000 UTC m=+0.079339872 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, release=1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.buildah.version=1.33.12) Oct 14 05:12:17 localhost podman[107340]: 2025-10-14 09:12:17.985246258 +0000 UTC m=+0.275033990 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container) Oct 14 05:12:17 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 05:12:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37791 DF PROTO=TCP SPT=40406 DPT=9101 SEQ=2499422360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FEEA690000000001030307) Oct 14 05:12:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16386 DF PROTO=TCP SPT=48958 DPT=9102 SEQ=665858454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FEF40E0000000001030307) Oct 14 05:12:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16387 DF PROTO=TCP SPT=48958 DPT=9102 SEQ=665858454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FEF8290000000001030307) Oct 14 05:12:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49649 DF PROTO=TCP SPT=43580 DPT=9100 SEQ=1487190902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF0DF40000000001030307) Oct 14 05:12:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57030 DF PROTO=TCP SPT=41664 DPT=9882 SEQ=3559877590 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF0E9A0000000001030307) Oct 14 05:12:32 localhost podman[107065]: time="2025-10-14T09:12:32Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Oct 14 05:12:32 localhost systemd[1]: libpod-1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.scope: 
Deactivated successfully. Oct 14 05:12:32 localhost systemd[1]: libpod-1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.scope: Consumed 6.215s CPU time. Oct 14 05:12:32 localhost podman[107065]: 2025-10-14 09:12:32.403211363 +0000 UTC m=+42.091139229 container stop 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:12:32 localhost podman[107065]: 2025-10-14 09:12:32.432541971 +0000 UTC m=+42.120469867 container died 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, config_id=tripleo_step4, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 14 05:12:32 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.timer: Deactivated successfully. Oct 14 05:12:32 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8. Oct 14 05:12:32 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Failed to open /run/systemd/transient/1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: No such file or directory Oct 14 05:12:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8-userdata-shm.mount: Deactivated successfully. Oct 14 05:12:32 localhost systemd[1]: var-lib-containers-storage-overlay-d7568b0c1b8802be3535f9c50fed9171f7f66ae1eaebd8b147d74d0e23471f5e-merged.mount: Deactivated successfully. 
Oct 14 05:12:32 localhost podman[107065]: 2025-10-14 09:12:32.497247589 +0000 UTC m=+42.185175455 container cleanup 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, vcs-type=git, architecture=x86_64, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:12:32 localhost podman[107065]: ceilometer_agent_compute Oct 14 05:12:32 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.timer: Failed to open /run/systemd/transient/1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.timer: No such file or directory Oct 14 05:12:32 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Failed to open /run/systemd/transient/1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: No such file or directory Oct 14 05:12:32 localhost podman[107369]: 2025-10-14 09:12:32.511464261 +0000 UTC m=+0.092607569 container cleanup 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp 
openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 14 05:12:32 localhost systemd[1]: libpod-conmon-1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.scope: Deactivated successfully. 
Oct 14 05:12:32 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.timer: Failed to open /run/systemd/transient/1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.timer: No such file or directory Oct 14 05:12:32 localhost systemd[1]: 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: Failed to open /run/systemd/transient/1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8.service: No such file or directory Oct 14 05:12:32 localhost podman[107382]: 2025-10-14 09:12:32.634538717 +0000 UTC m=+0.086389452 container cleanup 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, release=1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:12:32 localhost podman[107382]: ceilometer_agent_compute Oct 14 05:12:32 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Oct 14 05:12:32 localhost systemd[1]: Stopped ceilometer_agent_compute container. Oct 14 05:12:32 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.191s CPU time, no IO. Oct 14 05:12:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:12:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:12:33 localhost podman[107488]: 2025-10-14 09:12:33.139122633 +0000 UTC m=+0.070483084 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, container_name=iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.33.12, release=1, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1) Oct 14 05:12:33 localhost podman[107488]: 2025-10-14 09:12:33.154184938 +0000 UTC m=+0.085545359 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, com.redhat.component=openstack-iscsid-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, release=1, name=rhosp17/openstack-iscsid) Oct 14 05:12:33 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:12:33 localhost podman[107487]: 2025-10-14 09:12:33.252935521 +0000 UTC m=+0.183448670 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, distribution-scope=public, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, version=17.1.9, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd) Oct 14 05:12:33 localhost podman[107487]: 2025-10-14 09:12:33.263216578 +0000 UTC m=+0.193729747 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 
'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, release=2, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Oct 14 05:12:33 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:12:33 localhost python3.9[107486]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:12:33 localhost systemd[1]: Reloading. Oct 14 05:12:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49651 DF PROTO=TCP SPT=43580 DPT=9100 SEQ=1487190902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF19E90000000001030307) Oct 14 05:12:33 localhost systemd-rc-local-generator[107547]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:12:33 localhost systemd-sysv-generator[107552]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:12:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:12:33 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... 
Oct 14 05:12:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49652 DF PROTO=TCP SPT=43580 DPT=9100 SEQ=1487190902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF29A90000000001030307) Oct 14 05:12:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49841 DF PROTO=TCP SPT=57754 DPT=9105 SEQ=3490834444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF34E90000000001030307) Oct 14 05:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:12:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:12:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5658 writes, 25K keys, 5658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5658 writes, 708 syncs, 7.99 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4 writes, 8 keys, 4 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 4 writes, 2 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:12:41 localhost podman[107578]: Error: container f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 is not running Oct 14 05:12:41 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Main process 
exited, code=exited, status=125/n/a Oct 14 05:12:41 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Failed with result 'exit-code'. Oct 14 05:12:41 localhost podman[107577]: 2025-10-14 09:12:41.612428985 +0000 UTC m=+0.150946187 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-07-21T13:07:52, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, version=17.1.9, release=1, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 14 05:12:41 localhost podman[107577]: 2025-10-14 09:12:41.651305149 +0000 UTC m=+0.189822311 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:12:41 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:12:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:12:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:12:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. 
Oct 14 05:12:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37071 DF PROTO=TCP SPT=59510 DPT=9101 SEQ=1004694180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF43D30000000001030307) Oct 14 05:12:44 localhost podman[107609]: 2025-10-14 09:12:44.304061767 +0000 UTC m=+0.091480859 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-type=git, io.openshift.expose-services=, tcib_managed=true) Oct 14 05:12:44 localhost podman[107609]: 2025-10-14 09:12:44.346131317 +0000 UTC m=+0.133550359 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, release=1) Oct 14 05:12:44 localhost podman[107609]: unhealthy Oct 14 05:12:44 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:12:44 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:12:44 localhost podman[107610]: 2025-10-14 09:12:44.3585317 +0000 UTC m=+0.141771559 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.openshift.expose-services=, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:12:44 localhost podman[107611]: 2025-10-14 09:12:44.412267143 +0000 UTC m=+0.191981348 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:12:44 localhost podman[107610]: 2025-10-14 09:12:44.429703212 +0000 UTC m=+0.212942891 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 14 05:12:44 localhost podman[107610]: unhealthy Oct 14 05:12:44 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:12:44 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:12:44 localhost podman[107611]: 2025-10-14 09:12:44.48696727 +0000 UTC m=+0.266681455 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute) Oct 14 05:12:44 localhost podman[107611]: unhealthy Oct 14 05:12:44 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:12:44 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 05:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. 
Oct 14 05:12:45 localhost podman[107671]: 2025-10-14 09:12:45.542875017 +0000 UTC m=+0.083097764 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:12:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:49:0d:95 MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=35356 SEQ=2839280146 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 14 05:12:45 localhost podman[107671]: 2025-10-14 09:12:45.937379666 +0000 UTC m=+0.477602363 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., container_name=nova_migration_target, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true) Oct 14 05:12:45 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. 
Oct 14 05:12:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:12:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 4839 writes, 21K keys, 4839 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4839 writes, 659 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4 writes, 8 keys, 4 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 4 writes, 2 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:12:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:12:48 localhost podman[107694]: 2025-10-14 09:12:48.3000197 +0000 UTC m=+0.087005899 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:12:48 localhost podman[107694]: 2025-10-14 09:12:48.530311346 +0000 UTC m=+0.317297515 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12) Oct 14 05:12:48 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. 
Oct 14 05:12:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37074 DF PROTO=TCP SPT=59510 DPT=9101 SEQ=1004694180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF5FA90000000001030307) Oct 14 05:12:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50573 DF PROTO=TCP SPT=38218 DPT=9102 SEQ=1624874383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF693E0000000001030307) Oct 14 05:12:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50574 DF PROTO=TCP SPT=38218 DPT=9102 SEQ=1624874383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF6D290000000001030307) Oct 14 05:12:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:49:0d:95 MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=35356 SEQ=2839280146 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 14 05:13:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52446 DF PROTO=TCP SPT=43840 DPT=9882 SEQ=2040284484 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF83CA0000000001030307) Oct 14 05:13:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:13:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. 
Oct 14 05:13:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32526 DF PROTO=TCP SPT=35114 DPT=9100 SEQ=3890610489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF8F290000000001030307) Oct 14 05:13:03 localhost podman[107723]: 2025-10-14 09:13:03.549646218 +0000 UTC m=+0.087498432 container health_status c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., container_name=collectd, version=17.1.9, config_id=tripleo_step3) Oct 14 05:13:03 localhost podman[107723]: 2025-10-14 09:13:03.584910715 +0000 UTC m=+0.122762939 container exec_died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, container_name=collectd, config_id=tripleo_step3) Oct 14 05:13:03 localhost systemd[1]: tmp-crun.zF6vW0.mount: Deactivated successfully. Oct 14 05:13:03 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Deactivated successfully. 
Oct 14 05:13:03 localhost podman[107724]: 2025-10-14 09:13:03.601886881 +0000 UTC m=+0.137283990 container health_status df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, 
com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-iscsid, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 14 05:13:03 localhost podman[107724]: 2025-10-14 09:13:03.610383399 +0000 UTC m=+0.145780478 container exec_died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat 
OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, container_name=iscsid, version=17.1.9) Oct 14 05:13:03 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Deactivated successfully. Oct 14 05:13:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32527 DF PROTO=TCP SPT=35114 DPT=9100 SEQ=3890610489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FF9EE90000000001030307) Oct 14 05:13:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45345 DF PROTO=TCP SPT=46524 DPT=9105 SEQ=984942483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FFAA290000000001030307) Oct 14 05:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. 
Oct 14 05:13:12 localhost podman[107840]: Error: container f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 is not running Oct 14 05:13:12 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Main process exited, code=exited, status=125/n/a Oct 14 05:13:12 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Failed with result 'exit-code'. Oct 14 05:13:12 localhost podman[107839]: 2025-10-14 09:13:12.103626794 +0000 UTC m=+0.142683165 container health_status 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, container_name=logrotate_crond, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 14 05:13:12 localhost podman[107839]: 2025-10-14 09:13:12.113069898 +0000 UTC m=+0.152126209 container exec_died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 14 05:13:12 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Deactivated successfully. Oct 14 05:13:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41852 DF PROTO=TCP SPT=38476 DPT=9101 SEQ=409122982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FFB9030000000001030307) Oct 14 05:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 05:13:14 localhost podman[107871]: 2025-10-14 09:13:14.542787584 +0000 UTC m=+0.082643631 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44) Oct 14 05:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck 
run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:13:14 localhost podman[107872]: 2025-10-14 09:13:14.598935723 +0000 UTC m=+0.132820079 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., architecture=x86_64) Oct 14 05:13:14 localhost podman[107871]: 2025-10-14 09:13:14.617479531 +0000 UTC m=+0.157335638 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1) Oct 14 05:13:14 localhost podman[107871]: unhealthy Oct 14 05:13:14 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:13:14 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:13:14 localhost podman[107872]: 2025-10-14 09:13:14.648130065 +0000 UTC m=+0.182014371 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, distribution-scope=public, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:13:14 localhost podman[107872]: unhealthy Oct 14 05:13:14 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:13:14 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:13:14 localhost podman[107901]: 2025-10-14 09:13:14.712804292 +0000 UTC m=+0.138270016 container health_status a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step5) Oct 14 05:13:14 localhost podman[107901]: 2025-10-14 09:13:14.729608204 +0000 UTC m=+0.155073928 container exec_died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_id=tripleo_step5, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:13:14 
localhost podman[107901]: unhealthy Oct 14 05:13:14 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:13:14 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 05:13:15 localhost podman[107562]: time="2025-10-14T09:13:15Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL" Oct 14 05:13:15 localhost systemd[1]: libpod-f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.scope: Deactivated successfully. Oct 14 05:13:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:49:0d:95 MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=35370 SEQ=3745456705 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 14 05:13:15 localhost systemd[1]: libpod-f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.scope: Consumed 7.165s CPU time. 
Oct 14 05:13:15 localhost podman[107562]: 2025-10-14 09:13:15.944183743 +0000 UTC m=+42.104223993 container died f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, tcib_managed=true, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 14 05:13:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:13:15 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.timer: Deactivated successfully. Oct 14 05:13:15 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13. Oct 14 05:13:15 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Failed to open /run/systemd/transient/f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: No such file or directory Oct 14 05:13:15 localhost systemd[1]: tmp-crun.P54tTF.mount: Deactivated successfully. 
Oct 14 05:13:16 localhost podman[107939]: 2025-10-14 09:13:16.075852041 +0000 UTC m=+0.091728756 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, architecture=x86_64, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9) Oct 14 05:13:16 localhost podman[107562]: 2025-10-14 09:13:16.130583352 +0000 UTC m=+42.290623612 container cleanup f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true) Oct 14 05:13:16 localhost podman[107562]: ceilometer_agent_ipmi Oct 14 05:13:16 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.timer: Failed to open /run/systemd/transient/f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.timer: No such file or directory Oct 14 05:13:16 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Failed to open /run/systemd/transient/f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: No such file or directory Oct 14 05:13:16 localhost podman[107932]: 2025-10-14 09:13:16.146840869 +0000 UTC m=+0.191740453 container cleanup f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 14 05:13:16 localhost systemd[1]: libpod-conmon-f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.scope: Deactivated successfully. 
Oct 14 05:13:16 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.timer: Failed to open /run/systemd/transient/f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.timer: No such file or directory Oct 14 05:13:16 localhost systemd[1]: f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: Failed to open /run/systemd/transient/f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13.service: No such file or directory Oct 14 05:13:16 localhost podman[107968]: 2025-10-14 09:13:16.245683234 +0000 UTC m=+0.073908747 container cleanup f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, vcs-type=git, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi) Oct 14 05:13:16 localhost podman[107968]: ceilometer_agent_ipmi Oct 14 05:13:16 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Oct 14 05:13:16 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. 
Oct 14 05:13:16 localhost podman[107939]: 2025-10-14 09:13:16.512384299 +0000 UTC m=+0.528260994 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2025-07-21T14:48:37, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:13:16 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. Oct 14 05:13:16 localhost systemd[1]: var-lib-containers-storage-overlay-141f8240b493de051d128d8af481e4eecafe4083c7fc86019e21768efb6df1ea-merged.mount: Deactivated successfully. Oct 14 05:13:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5d704fac1396b8c19426b54f52a12dc317152f95cde6782abb3b3a21321ac13-userdata-shm.mount: Deactivated successfully. Oct 14 05:13:17 localhost python3.9[108074]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:13:17 localhost systemd[1]: Reloading. Oct 14 05:13:17 localhost systemd-sysv-generator[108102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:13:17 localhost systemd-rc-local-generator[108098]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:13:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:13:17 localhost systemd[1]: Stopping collectd container... Oct 14 05:13:17 localhost systemd[1]: tmp-crun.5MszGM.mount: Deactivated successfully. 
Oct 14 05:13:18 localhost systemd[1]: libpod-c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.scope: Deactivated successfully. Oct 14 05:13:18 localhost systemd[1]: libpod-c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.scope: Consumed 2.311s CPU time. Oct 14 05:13:18 localhost podman[108115]: 2025-10-14 09:13:18.401260184 +0000 UTC m=+0.950718882 container died c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, container_name=collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, distribution-scope=public, release=2, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03) Oct 14 05:13:18 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.timer: Deactivated successfully. Oct 14 05:13:18 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611. Oct 14 05:13:18 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Failed to open /run/systemd/transient/c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: No such file or directory Oct 14 05:13:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611-userdata-shm.mount: Deactivated successfully. 
Oct 14 05:13:18 localhost podman[108115]: 2025-10-14 09:13:18.462333235 +0000 UTC m=+1.011791903 container cleanup c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, container_name=collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, distribution-scope=public) Oct 14 05:13:18 localhost podman[108115]: collectd Oct 14 05:13:18 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.timer: Failed to open /run/systemd/transient/c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.timer: No such file or directory Oct 14 05:13:18 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Failed to open /run/systemd/transient/c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: No such file or directory Oct 14 05:13:18 localhost podman[108127]: 2025-10-14 09:13:18.508165046 +0000 UTC m=+0.096259516 container cleanup c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Oct 14 05:13:18 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:13:18 localhost systemd[1]: libpod-conmon-c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.scope: Deactivated successfully. 
Oct 14 05:13:18 localhost podman[108155]: error opening file `/run/crun/c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611/status`: No such file or directory Oct 14 05:13:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:13:18 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.timer: Failed to open /run/systemd/transient/c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.timer: No such file or directory Oct 14 05:13:18 localhost systemd[1]: c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: Failed to open /run/systemd/transient/c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611.service: No such file or directory Oct 14 05:13:18 localhost podman[108144]: 2025-10-14 09:13:18.630781021 +0000 UTC m=+0.095954809 container cleanup c02d9ba88db78553138307fc24cdfd94af4733aef6829a00699794cd8c402611 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) Oct 14 05:13:18 localhost podman[108144]: collectd Oct 14 05:13:18 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'. Oct 14 05:13:18 localhost systemd[1]: Stopped collectd container. 
Oct 14 05:13:18 localhost podman[108157]: 2025-10-14 09:13:18.727710965 +0000 UTC m=+0.093018980 container health_status 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team) Oct 14 05:13:18 localhost podman[108157]: 2025-10-14 09:13:18.916169368 +0000 UTC m=+0.281477433 container exec_died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Oct 14 05:13:18 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Deactivated successfully. Oct 14 05:13:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:49:0d:95 MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=35370 SEQ=3745456705 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 14 05:13:19 localhost python3.9[108279]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:13:19 localhost systemd[1]: var-lib-containers-storage-overlay-7ab4a314da1a4f576142ebf117938164a5edfd56bd6085edc385b152e23dd08e-merged.mount: Deactivated successfully. Oct 14 05:13:19 localhost systemd[1]: Reloading. Oct 14 05:13:19 localhost systemd-sysv-generator[108306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 05:13:19 localhost systemd-rc-local-generator[108302]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:13:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:13:19 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:13:19 localhost systemd[1]: Stopping iscsid container... Oct 14 05:13:19 localhost recover_tripleo_nova_virtqemud[108322]: 62532 Oct 14 05:13:19 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:13:19 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 05:13:19 localhost systemd[1]: tmp-crun.iy5GQK.mount: Deactivated successfully. Oct 14 05:13:19 localhost systemd[1]: libpod-df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.scope: Deactivated successfully. Oct 14 05:13:19 localhost systemd[1]: libpod-df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.scope: Consumed 1.262s CPU time. 
Oct 14 05:13:19 localhost podman[108321]: 2025-10-14 09:13:19.844099807 +0000 UTC m=+0.077174214 container died df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true) Oct 14 05:13:19 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.timer: Deactivated successfully. Oct 14 05:13:19 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a. Oct 14 05:13:19 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Failed to open /run/systemd/transient/df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: No such file or directory Oct 14 05:13:19 localhost podman[108321]: 2025-10-14 09:13:19.885488469 +0000 UTC m=+0.118562826 container cleanup df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, container_name=iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:13:19 localhost podman[108321]: iscsid Oct 14 05:13:19 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.timer: Failed to open /run/systemd/transient/df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.timer: No such file or directory Oct 14 05:13:19 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Failed to open /run/systemd/transient/df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: No such file or directory Oct 14 05:13:19 localhost podman[108334]: 2025-10-14 09:13:19.914641153 +0000 UTC m=+0.064351230 container cleanup df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, vcs-type=git, container_name=iscsid, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:13:19 localhost 
systemd[1]: libpod-conmon-df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.scope: Deactivated successfully. Oct 14 05:13:20 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.timer: Failed to open /run/systemd/transient/df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.timer: No such file or directory Oct 14 05:13:20 localhost systemd[1]: df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: Failed to open /run/systemd/transient/df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a.service: No such file or directory Oct 14 05:13:20 localhost podman[108347]: 2025-10-14 09:13:20.012761259 +0000 UTC m=+0.066791206 container cleanup df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1) Oct 14 05:13:20 localhost podman[108347]: iscsid Oct 14 05:13:20 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully. Oct 14 05:13:20 localhost systemd[1]: Stopped iscsid container. Oct 14 05:13:20 localhost systemd[1]: var-lib-containers-storage-overlay-09fa0ed2f8930991b55c20b14a15d726f2d078ff05272993cec0208c15a14da5-merged.mount: Deactivated successfully. Oct 14 05:13:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a-userdata-shm.mount: Deactivated successfully. Oct 14 05:13:20 localhost python3.9[108451]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:13:20 localhost systemd[1]: Reloading. Oct 14 05:13:21 localhost systemd-rc-local-generator[108477]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 05:13:21 localhost systemd-sysv-generator[108483]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:13:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:13:21 localhost systemd[1]: Stopping logrotate_crond container... Oct 14 05:13:21 localhost systemd[1]: libpod-1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.scope: Deactivated successfully. Oct 14 05:13:21 localhost systemd[1]: libpod-1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.scope: Consumed 1.171s CPU time. Oct 14 05:13:21 localhost podman[108492]: 2025-10-14 09:13:21.382532449 +0000 UTC m=+0.084837341 container died 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, container_name=logrotate_crond, distribution-scope=public, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron) Oct 14 05:13:21 localhost systemd[1]: tmp-crun.6H8e2r.mount: Deactivated successfully. Oct 14 05:13:21 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.timer: Deactivated successfully. Oct 14 05:13:21 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6. Oct 14 05:13:21 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Failed to open /run/systemd/transient/1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: No such file or directory Oct 14 05:13:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6-userdata-shm.mount: Deactivated successfully. 
Oct 14 05:13:21 localhost systemd[1]: var-lib-containers-storage-overlay-fc06e989b61b0623172ed8f6228aeadb5ab4e2033fa5c722e42cb9029cc166b7-merged.mount: Deactivated successfully. Oct 14 05:13:21 localhost podman[108492]: 2025-10-14 09:13:21.439510309 +0000 UTC m=+0.141815181 container cleanup 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 14 05:13:21 localhost podman[108492]: logrotate_crond Oct 14 05:13:21 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.timer: Failed to open /run/systemd/transient/1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.timer: No such file or directory Oct 14 05:13:21 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Failed to open /run/systemd/transient/1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: No such file or directory Oct 14 05:13:21 localhost podman[108506]: 2025-10-14 09:13:21.483094399 +0000 UTC m=+0.091116818 container cleanup 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 14 05:13:21 localhost systemd[1]: libpod-conmon-1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.scope: Deactivated successfully. 
Oct 14 05:13:21 localhost podman[108536]: error opening file `/run/crun/1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6/status`: No such file or directory Oct 14 05:13:21 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.timer: Failed to open /run/systemd/transient/1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.timer: No such file or directory Oct 14 05:13:21 localhost systemd[1]: 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: Failed to open /run/systemd/transient/1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6.service: No such file or directory Oct 14 05:13:21 localhost podman[108523]: 2025-10-14 09:13:21.600146294 +0000 UTC m=+0.080581286 container cleanup 1e9e971ac5d647c6ce71f88f303a4cc68af4ed922325d78ce20dd6504beed4f6 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, distribution-scope=public, release=1, name=rhosp17/openstack-cron, version=17.1.9, vcs-type=git, io.openshift.expose-services=, container_name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) Oct 14 05:13:21 localhost podman[108523]: logrotate_crond Oct 14 05:13:21 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully. Oct 14 05:13:21 localhost systemd[1]: Stopped logrotate_crond container. Oct 14 05:13:22 localhost python3.9[108630]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:13:22 localhost systemd[1]: Reloading. Oct 14 05:13:22 localhost systemd-rc-local-generator[108652]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:13:22 localhost systemd-sysv-generator[108656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 05:13:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:49:0d:95 MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=35370 SEQ=3745456705 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 14 05:13:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:13:22 localhost systemd[1]: Stopping metrics_qdr container... Oct 14 05:13:22 localhost systemd[1]: tmp-crun.JJ4R4W.mount: Deactivated successfully. Oct 14 05:13:22 localhost kernel: qdrouterd[55128]: segfault at 0 ip 00007fc09ac2c7cb sp 00007ffeb411dcb0 error 4 in libc.so.6[7fc09abc9000+175000] Oct 14 05:13:22 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9 Oct 14 05:13:22 localhost systemd[1]: Created slice Slice /system/systemd-coredump. Oct 14 05:13:22 localhost systemd[1]: Started Process Core Dump (PID 108685/UID 0). Oct 14 05:13:22 localhost systemd-coredump[108686]: Resource limits disable core dumping for process 55128 (qdrouterd). Oct 14 05:13:22 localhost systemd-coredump[108686]: Process 55128 (qdrouterd) of user 42465 dumped core. Oct 14 05:13:22 localhost systemd[1]: systemd-coredump@0-108685-0.service: Deactivated successfully. 
Oct 14 05:13:22 localhost podman[108671]: 2025-10-14 09:13:22.984191848 +0000 UTC m=+0.231496530 container died 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, config_id=tripleo_step1, batch=17.1_20250721.1, 
com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, managed_by=tripleo_ansible) Oct 14 05:13:22 localhost systemd[1]: libpod-4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.scope: Deactivated successfully. Oct 14 05:13:22 localhost systemd[1]: libpod-4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.scope: Consumed 30.119s CPU time. Oct 14 05:13:23 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.timer: Deactivated successfully. Oct 14 05:13:23 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402. Oct 14 05:13:23 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Failed to open /run/systemd/transient/4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: No such file or directory Oct 14 05:13:23 localhost systemd[1]: tmp-crun.FmGbHG.mount: Deactivated successfully. 
Oct 14 05:13:23 localhost podman[108671]: 2025-10-14 09:13:23.036228236 +0000 UTC m=+0.283532948 container cleanup 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, container_name=metrics_qdr, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, config_id=tripleo_step1, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Oct 14 05:13:23 localhost podman[108671]: metrics_qdr Oct 14 05:13:23 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.timer: Failed to open /run/systemd/transient/4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.timer: No such file or directory Oct 14 05:13:23 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Failed to open /run/systemd/transient/4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: No such file or directory Oct 14 05:13:23 localhost podman[108690]: 2025-10-14 09:13:23.089407245 +0000 UTC m=+0.095263800 container cleanup 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, release=1, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1) Oct 14 05:13:23 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a Oct 14 05:13:23 localhost systemd[1]: libpod-conmon-4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.scope: Deactivated successfully. 
Oct 14 05:13:23 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.timer: Failed to open /run/systemd/transient/4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.timer: No such file or directory Oct 14 05:13:23 localhost systemd[1]: 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: Failed to open /run/systemd/transient/4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402.service: No such file or directory Oct 14 05:13:23 localhost podman[108704]: 2025-10-14 09:13:23.187074599 +0000 UTC m=+0.070141866 container cleanup 4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b495038a864008964602910aa3c03fe1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, distribution-scope=public, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Oct 14 05:13:23 localhost podman[108704]: metrics_qdr Oct 14 05:13:23 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Oct 14 05:13:23 localhost systemd[1]: Stopped metrics_qdr container. Oct 14 05:13:23 localhost systemd[1]: var-lib-containers-storage-overlay-4cc5b6d664010750643235f3f70d195ea350655d57182e7e57ebfd557533d6a2-merged.mount: Deactivated successfully. Oct 14 05:13:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402-userdata-shm.mount: Deactivated successfully. 
Oct 14 05:13:24 localhost python3.9[108808]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:13:24 localhost python3.9[108901]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:13:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64811 DF PROTO=TCP SPT=58956 DPT=9102 SEQ=1145570737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FFE2690000000001030307) Oct 14 05:13:25 localhost python3.9[108994]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:13:26 localhost python3.9[109087]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:13:26 localhost systemd[1]: Reloading. Oct 14 05:13:26 localhost systemd-rc-local-generator[109106]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:13:26 localhost systemd-sysv-generator[109110]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:13:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 14 05:13:26 localhost systemd[1]: Stopping nova_compute container... Oct 14 05:13:26 localhost systemd[1]: tmp-crun.Rsb98t.mount: Deactivated successfully. Oct 14 05:13:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:49:0d:95 MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=35370 SEQ=3745456705 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 14 05:13:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61453 DF PROTO=TCP SPT=40240 DPT=9882 SEQ=2651784462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75FFF8FA0000000001030307) Oct 14 05:13:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42955 DF PROTO=TCP SPT=41212 DPT=9100 SEQ=3898077725 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760004690000000001030307) Oct 14 05:13:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42956 DF PROTO=TCP SPT=41212 DPT=9100 SEQ=3898077725 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600142A0000000001030307) Oct 14 05:13:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30086 DF PROTO=TCP SPT=44682 DPT=9105 SEQ=92954853 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76001F690000000001030307) Oct 14 05:13:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:49:0d:95 MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 
TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=35370 SEQ=3745456705 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 14 05:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:13:45 localhost podman[109141]: 2025-10-14 09:13:45.06379696 +0000 UTC m=+0.098552838 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, 
vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44) Oct 14 05:13:45 localhost podman[109143]: Error: container a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e is not running Oct 14 05:13:45 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Main process exited, code=exited, status=125/n/a Oct 14 05:13:45 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed with result 'exit-code'. Oct 14 05:13:45 localhost podman[109141]: 2025-10-14 09:13:45.111099081 +0000 UTC m=+0.145854879 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-ovn-controller) Oct 14 05:13:45 localhost podman[109141]: unhealthy Oct 14 05:13:45 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:13:45 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:13:45 localhost podman[109142]: 2025-10-14 09:13:45.165864262 +0000 UTC m=+0.200978170 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:13:45 localhost podman[109142]: 2025-10-14 09:13:45.206224247 +0000 UTC m=+0.241338155 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ovn_metadata_agent, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53) Oct 14 05:13:45 localhost podman[109142]: unhealthy Oct 14 05:13:45 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:13:45 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. 
Oct 14 05:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:13:47 localhost systemd[1]: tmp-crun.XVpKOf.mount: Deactivated successfully. Oct 14 05:13:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39207 DF PROTO=TCP SPT=52350 DPT=9101 SEQ=175088173 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76003A290000000001030307) Oct 14 05:13:47 localhost podman[109191]: 2025-10-14 09:13:47.288282302 +0000 UTC m=+0.082930140 container health_status 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.buildah.version=1.33.12) Oct 14 05:13:47 localhost podman[109191]: 2025-10-14 09:13:47.742367721 +0000 UTC m=+0.537015529 container exec_died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:13:47 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Deactivated successfully. 
Oct 14 05:13:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39208 DF PROTO=TCP SPT=52350 DPT=9101 SEQ=175088173 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760049E90000000001030307) Oct 14 05:13:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25228 DF PROTO=TCP SPT=33248 DPT=9102 SEQ=3101060792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600539E0000000001030307) Oct 14 05:13:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25229 DF PROTO=TCP SPT=33248 DPT=9102 SEQ=3101060792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760057A90000000001030307) Oct 14 05:14:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2582 DF PROTO=TCP SPT=51238 DPT=9100 SEQ=1539107038 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76006D830000000001030307) Oct 14 05:14:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38427 DF PROTO=TCP SPT=59154 DPT=9882 SEQ=161788748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76006E290000000001030307) Oct 14 05:14:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2584 DF PROTO=TCP SPT=51238 DPT=9100 SEQ=1539107038 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A760079A90000000001030307) Oct 14 05:14:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2585 DF PROTO=TCP SPT=51238 DPT=9100 SEQ=1539107038 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600896A0000000001030307) Oct 14 05:14:08 localhost podman[109128]: time="2025-10-14T09:14:08Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Oct 14 05:14:08 localhost systemd[1]: libpod-a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.scope: Deactivated successfully. Oct 14 05:14:08 localhost systemd[1]: libpod-a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.scope: Consumed 28.055s CPU time. Oct 14 05:14:08 localhost podman[109128]: 2025-10-14 09:14:08.875205405 +0000 UTC m=+42.115855916 container died a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true) Oct 14 05:14:08 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.timer: 
Deactivated successfully. Oct 14 05:14:08 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e. Oct 14 05:14:08 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed to open /run/systemd/transient/a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: No such file or directory Oct 14 05:14:08 localhost systemd[1]: var-lib-containers-storage-overlay-50bc9ba3f84039f07e25a57d0e85a4cd956846d0f86f31738331270568331766-merged.mount: Deactivated successfully. Oct 14 05:14:08 localhost podman[109128]: 2025-10-14 09:14:08.936701727 +0000 UTC m=+42.177352208 container cleanup a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, release=1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64) Oct 14 05:14:08 localhost podman[109128]: nova_compute Oct 14 05:14:09 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.timer: Failed to open /run/systemd/transient/a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.timer: No such file or directory Oct 14 05:14:09 localhost systemd[1]: 
a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed to open /run/systemd/transient/a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: No such file or directory Oct 14 05:14:09 localhost podman[109257]: 2025-10-14 09:14:09.009923734 +0000 UTC m=+0.124110885 container cleanup a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_id=tripleo_step5, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute) Oct 14 05:14:09 localhost systemd[1]: libpod-conmon-a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.scope: Deactivated successfully. 
Oct 14 05:14:09 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.timer: Failed to open /run/systemd/transient/a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.timer: No such file or directory Oct 14 05:14:09 localhost systemd[1]: a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: Failed to open /run/systemd/transient/a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e.service: No such file or directory Oct 14 05:14:09 localhost podman[109277]: 2025-10-14 09:14:09.120918396 +0000 UTC m=+0.079384134 container cleanup a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:14:09 localhost podman[109277]: nova_compute Oct 14 05:14:09 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. Oct 14 05:14:09 localhost systemd[1]: Stopped nova_compute container. Oct 14 05:14:09 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.241s CPU time, no IO. 
Oct 14 05:14:09 localhost python3.9[109391]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:14:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40037 DF PROTO=TCP SPT=51898 DPT=9105 SEQ=3572919633 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760094AA0000000001030307) Oct 14 05:14:11 localhost systemd[1]: Reloading. Oct 14 05:14:11 localhost systemd-sysv-generator[109423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:14:11 localhost systemd-rc-local-generator[109420]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:14:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:14:11 localhost systemd[1]: Stopping nova_migration_target container... Oct 14 05:14:11 localhost systemd[1]: libpod-5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.scope: Deactivated successfully. Oct 14 05:14:11 localhost systemd[1]: libpod-5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.scope: Consumed 35.036s CPU time. 
Oct 14 05:14:11 localhost podman[109432]: 2025-10-14 09:14:11.475590055 +0000 UTC m=+0.081665974 container died 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, architecture=x86_64, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 14 05:14:11 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.timer: Deactivated successfully. Oct 14 05:14:11 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537. Oct 14 05:14:11 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Failed to open /run/systemd/transient/5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: No such file or directory Oct 14 05:14:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537-userdata-shm.mount: Deactivated successfully. 
Oct 14 05:14:11 localhost podman[109432]: 2025-10-14 09:14:11.526294608 +0000 UTC m=+0.132370497 container cleanup 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:14:11 localhost podman[109432]: nova_migration_target Oct 14 05:14:11 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.timer: Failed to open /run/systemd/transient/5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.timer: No such file or directory Oct 14 05:14:11 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Failed to open /run/systemd/transient/5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: No such file or directory Oct 14 05:14:11 localhost podman[109445]: 2025-10-14 09:14:11.560782984 +0000 UTC m=+0.074436110 container cleanup 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, release=1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public) Oct 14 05:14:11 localhost systemd[1]: libpod-conmon-5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.scope: Deactivated successfully. 
Oct 14 05:14:11 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.timer: Failed to open /run/systemd/transient/5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.timer: No such file or directory Oct 14 05:14:11 localhost systemd[1]: 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: Failed to open /run/systemd/transient/5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537.service: No such file or directory Oct 14 05:14:11 localhost podman[109459]: 2025-10-14 09:14:11.64919835 +0000 UTC m=+0.052828091 container cleanup 5620c32330efbc51d2f27c0cd16da4c5900dc7348b4dd6cab9a0214de5e5a537 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true) Oct 14 05:14:11 localhost podman[109459]: nova_migration_target Oct 14 05:14:11 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Oct 14 05:14:11 localhost systemd[1]: Stopped nova_migration_target container. Oct 14 05:14:12 localhost systemd[1]: var-lib-containers-storage-overlay-4021d20142192293b753d5aa3904830cf887c958e51a03d916a4726fdc448e46-merged.mount: Deactivated successfully. Oct 14 05:14:12 localhost python3.9[109561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:14:12 localhost systemd[1]: Reloading. Oct 14 05:14:12 localhost systemd-rc-local-generator[109600]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:14:12 localhost systemd-sysv-generator[109604]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:14:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:14:12 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Oct 14 05:14:12 localhost podman[109617]: 2025-10-14 09:14:12.931591222 +0000 UTC m=+0.073071094 container stop decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, container_name=nova_virtlogd_wrapper, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=2, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.) Oct 14 05:14:12 localhost systemd[1]: libpod-decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006.scope: Deactivated successfully. 
Oct 14 05:14:12 localhost podman[109617]: 2025-10-14 09:14:12.961291049 +0000 UTC m=+0.102770871 container died decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, tcib_managed=true, config_id=tripleo_step3, container_name=nova_virtlogd_wrapper, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', 
'/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 14 05:14:12 localhost systemd[1]: tmp-crun.7WyVul.mount: Deactivated successfully. 
Oct 14 05:14:13 localhost podman[109617]: 2025-10-14 09:14:13.006373671 +0000 UTC m=+0.147853503 container cleanup decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.buildah.version=1.33.12, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, release=2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git) Oct 14 05:14:13 localhost podman[109617]: nova_virtlogd_wrapper Oct 14 05:14:13 localhost podman[109630]: 2025-10-14 09:14:13.068712206 +0000 UTC m=+0.122727028 container cleanup decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., 
com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, vcs-type=git, container_name=nova_virtlogd_wrapper, release=2, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 14 05:14:13 localhost systemd[1]: var-lib-containers-storage-overlay-f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b-merged.mount: Deactivated successfully. Oct 14 05:14:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006-userdata-shm.mount: Deactivated successfully. Oct 14 05:14:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33255 DF PROTO=TCP SPT=45500 DPT=9101 SEQ=2244916058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600A3630000000001030307) Oct 14 05:14:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:14:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 05:14:15 localhost podman[109647]: 2025-10-14 09:14:15.544698295 +0000 UTC m=+0.083366470 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=) Oct 14 05:14:15 localhost systemd[1]: tmp-crun.Rotp1K.mount: Deactivated 
successfully. Oct 14 05:14:15 localhost podman[109647]: 2025-10-14 09:14:15.597140394 +0000 UTC m=+0.135808539 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:14:15 localhost podman[109647]: unhealthy Oct 14 05:14:15 localhost systemd[1]: 
403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:14:15 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:14:15 localhost podman[109648]: 2025-10-14 09:14:15.599003314 +0000 UTC m=+0.135555492 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:14:15 localhost podman[109648]: 2025-10-14 09:14:15.679096526 +0000 UTC m=+0.215648684 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, release=1, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.openshift.expose-services=) Oct 14 05:14:15 localhost podman[109648]: unhealthy Oct 14 05:14:15 localhost systemd[1]: 
9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:14:15 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:14:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33257 DF PROTO=TCP SPT=45500 DPT=9101 SEQ=2244916058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600AF690000000001030307) Oct 14 05:14:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33258 DF PROTO=TCP SPT=45500 DPT=9101 SEQ=2244916058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600BF290000000001030307) Oct 14 05:14:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65390 DF PROTO=TCP SPT=60296 DPT=9102 SEQ=185751395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600C8CE0000000001030307) Oct 14 05:14:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65391 DF PROTO=TCP SPT=60296 DPT=9102 SEQ=185751395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600CCE90000000001030307) Oct 14 05:14:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13392 DF PROTO=TCP SPT=58344 DPT=9100 SEQ=1196746539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600E2B30000000001030307) Oct 14 05:14:30 localhost kernel: DROPPING: IN=br-ex 
OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16858 DF PROTO=TCP SPT=51998 DPT=9882 SEQ=1876090919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600E35A0000000001030307) Oct 14 05:14:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13394 DF PROTO=TCP SPT=58344 DPT=9100 SEQ=1196746539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600EEA90000000001030307) Oct 14 05:14:34 localhost sshd[109688]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:14:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13395 DF PROTO=TCP SPT=58344 DPT=9100 SEQ=1196746539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7600FE6A0000000001030307) Oct 14 05:14:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26378 DF PROTO=TCP SPT=54948 DPT=9105 SEQ=3945991567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760109A90000000001030307) Oct 14 05:14:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57025 DF PROTO=TCP SPT=35984 DPT=9101 SEQ=3499983233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760118930000000001030307) Oct 14 05:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 05:14:46 localhost podman[109690]: 2025-10-14 09:14:46.0472897 +0000 UTC m=+0.084382488 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, tcib_managed=true, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., container_name=ovn_controller, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, architecture=x86_64) Oct 14 05:14:46 localhost podman[109691]: 2025-10-14 09:14:46.098372912 +0000 
UTC m=+0.130255580 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-type=git, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 14 05:14:46 localhost podman[109690]: 2025-10-14 09:14:46.11803673 +0000 UTC m=+0.155129558 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, 
container_name=ovn_controller, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:14:46 localhost podman[109690]: unhealthy Oct 14 05:14:46 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:14:46 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. 
Oct 14 05:14:46 localhost podman[109691]: 2025-10-14 09:14:46.139103606 +0000 UTC m=+0.170986314 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, 
io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:14:46 localhost podman[109691]: unhealthy Oct 14 05:14:46 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:14:46 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:14:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57027 DF PROTO=TCP SPT=35984 DPT=9101 SEQ=3499983233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760124A90000000001030307) Oct 14 05:14:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:14:48 localhost recover_tripleo_nova_virtqemud[109728]: 62532 Oct 14 05:14:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:14:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 14 05:14:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57028 DF PROTO=TCP SPT=35984 DPT=9101 SEQ=3499983233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760134690000000001030307) Oct 14 05:14:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49507 DF PROTO=TCP SPT=35442 DPT=9102 SEQ=2428707706 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76013DFD0000000001030307) Oct 14 05:14:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49508 DF PROTO=TCP SPT=35442 DPT=9102 SEQ=2428707706 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760141E90000000001030307) Oct 14 05:15:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6325 DF PROTO=TCP SPT=54408 DPT=9100 SEQ=3592004860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760157E40000000001030307) Oct 14 05:15:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60714 DF PROTO=TCP SPT=49528 DPT=9882 SEQ=3665323801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760158890000000001030307) Oct 14 05:15:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6327 DF PROTO=TCP SPT=54408 DPT=9100 SEQ=3592004860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A760163EA0000000001030307) Oct 14 05:15:05 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 14 05:15:05 localhost recover_tripleo_nova_virtqemud[109730]: 62532 Oct 14 05:15:05 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 14 05:15:05 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 14 05:15:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6328 DF PROTO=TCP SPT=54408 DPT=9100 SEQ=3592004860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760173A90000000001030307) Oct 14 05:15:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38887 DF PROTO=TCP SPT=47074 DPT=9105 SEQ=3074235449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76017EE90000000001030307) Oct 14 05:15:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29383 DF PROTO=TCP SPT=47916 DPT=9101 SEQ=662494562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76018DC30000000001030307) Oct 14 05:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:15:16 localhost systemd[1]: tmp-crun.laMATX.mount: Deactivated successfully. 
Oct 14 05:15:16 localhost podman[109859]: 2025-10-14 09:15:16.555974053 +0000 UTC m=+0.093468642 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=ovn_controller, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 14 05:15:16 localhost podman[109859]: 2025-10-14 09:15:16.578106587 
+0000 UTC m=+0.115601226 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, container_name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4) Oct 14 05:15:16 localhost podman[109859]: unhealthy Oct 14 05:15:16 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process 
exited, code=exited, status=1/FAILURE Oct 14 05:15:16 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:15:16 localhost podman[109860]: 2025-10-14 09:15:16.592945526 +0000 UTC m=+0.128893354 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, vcs-type=git, architecture=x86_64, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:15:16 localhost podman[109860]: 2025-10-14 09:15:16.67611516 +0000 UTC m=+0.212062988 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, vcs-type=git, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 05:15:16 localhost podman[109860]: unhealthy Oct 14 05:15:16 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 
05:15:16 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:15:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29385 DF PROTO=TCP SPT=47916 DPT=9101 SEQ=662494562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760199EA0000000001030307) Oct 14 05:15:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29386 DF PROTO=TCP SPT=47916 DPT=9101 SEQ=662494562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7601A9A90000000001030307) Oct 14 05:15:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7131 DF PROTO=TCP SPT=52512 DPT=9102 SEQ=361120283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7601B32E0000000001030307) Oct 14 05:15:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7132 DF PROTO=TCP SPT=52512 DPT=9102 SEQ=361120283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7601B72A0000000001030307) Oct 14 05:15:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60495 DF PROTO=TCP SPT=47838 DPT=9100 SEQ=590385716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7601CD140000000001030307) Oct 14 05:15:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=11261 DF PROTO=TCP SPT=42668 DPT=9882 SEQ=228104170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7601CDB90000000001030307) Oct 14 05:15:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60497 DF PROTO=TCP SPT=47838 DPT=9100 SEQ=590385716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7601D9290000000001030307) Oct 14 05:15:37 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. Oct 14 05:15:37 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61736 (conmon) with signal SIGKILL. Oct 14 05:15:37 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Oct 14 05:15:37 localhost systemd[1]: libpod-conmon-decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006.scope: Deactivated successfully. 
Oct 14 05:15:37 localhost podman[109907]: error opening file `/run/crun/decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006/status`: No such file or directory Oct 14 05:15:37 localhost podman[109896]: 2025-10-14 09:15:37.284103223 +0000 UTC m=+0.070498325 container cleanup decaf7e30bf2d14321804af2dbbca94d25f6ce358a15e73d4489f01e7c485006 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, config_id=tripleo_step3, container_name=nova_virtlogd_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=2, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 14 05:15:37 localhost podman[109896]: nova_virtlogd_wrapper Oct 14 05:15:37 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. Oct 14 05:15:37 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. 
Oct 14 05:15:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60498 DF PROTO=TCP SPT=47838 DPT=9100 SEQ=590385716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7601E8E90000000001030307) Oct 14 05:15:38 localhost python3.9[110000]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:15:38 localhost systemd[1]: Reloading. Oct 14 05:15:38 localhost systemd-rc-local-generator[110029]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:15:38 localhost systemd-sysv-generator[110033]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:15:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:15:38 localhost systemd[1]: Stopping nova_virtnodedevd container... Oct 14 05:15:38 localhost systemd[1]: libpod-30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed.scope: Deactivated successfully. Oct 14 05:15:38 localhost systemd[1]: libpod-30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed.scope: Consumed 1.482s CPU time. 
Oct 14 05:15:38 localhost podman[110041]: 2025-10-14 09:15:38.610855016 +0000 UTC m=+0.082667581 container died 30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, release=2, tcib_managed=true, build-date=2025-07-21T14:56:59, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_virtnodedevd, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible)
Oct 14 05:15:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed-userdata-shm.mount: Deactivated successfully.
Oct 14 05:15:38 localhost podman[110041]: 2025-10-14 09:15:38.653443651 +0000 UTC m=+0.125256206 container cleanup 30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_virtnodedevd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, release=2, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1)
Oct 14 05:15:38 localhost podman[110041]: nova_virtnodedevd
Oct 14 05:15:38 localhost podman[110056]: 2025-10-14 09:15:38.709460736 +0000 UTC m=+0.080607017 container cleanup 30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, tcib_managed=true, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., architecture=x86_64, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2,
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']})
Oct 14 05:15:38 localhost systemd[1]: libpod-conmon-30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed.scope: Deactivated successfully.
Oct 14 05:15:38 localhost podman[110082]: error opening file `/run/crun/30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed/status`: No such file or directory
Oct 14 05:15:38 localhost podman[110071]: 2025-10-14 09:15:38.811914998 +0000 UTC m=+0.074880933 container cleanup 30440b7f453f060d3c923d095b4176ad8af6c2dc4126a7c76348b7c33de0f4ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit':
65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9)
Oct 14 05:15:38 localhost podman[110071]: nova_virtnodedevd
Oct 14 05:15:38 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully.
Oct 14 05:15:38 localhost systemd[1]: Stopped nova_virtnodedevd container.
Oct 14 05:15:39 localhost systemd[1]: var-lib-containers-storage-overlay-e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323-merged.mount: Deactivated successfully.
Oct 14 05:15:39 localhost python3.9[110175]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:15:39 localhost systemd[1]: Reloading.
Oct 14 05:15:39 localhost systemd-sysv-generator[110205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:15:39 localhost systemd-rc-local-generator[110200]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:15:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:15:40 localhost systemd[1]: Stopping nova_virtproxyd container...
Oct 14 05:15:40 localhost systemd[1]: libpod-2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd.scope: Deactivated successfully.
Oct 14 05:15:40 localhost podman[110216]: 2025-10-14 09:15:40.152060422 +0000 UTC m=+0.086555617 container died 2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_virtproxyd, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T14:56:59, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=2, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0)
Oct 14 05:15:40 localhost podman[110216]: 2025-10-14 09:15:40.190597326 +0000 UTC m=+0.125092491 container cleanup 2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, distribution-scope=public, release=2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image':
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack 
osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true)
Oct 14 05:15:40 localhost podman[110216]: nova_virtproxyd
Oct 14 05:15:40 localhost podman[110229]: 2025-10-14 09:15:40.224695363 +0000 UTC m=+0.056066687 container cleanup 2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run',
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20250721.1, version=17.1.9, container_name=nova_virtproxyd, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, release=2, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, managed_by=tripleo_ansible)
Oct 14 05:15:40 localhost systemd[1]: libpod-conmon-2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd.scope: Deactivated successfully.
Oct 14 05:15:40 localhost podman[110257]: error opening file `/run/crun/2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd/status`: No such file or directory
Oct 14 05:15:40 localhost podman[110244]: 2025-10-14 09:15:40.331439841 +0000 UTC m=+0.072251462 container cleanup 2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, container_name=nova_virtproxyd, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro',
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, release=2, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.expose-services=)
Oct 14 05:15:40 localhost podman[110244]: nova_virtproxyd
Oct 14 05:15:40 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully.
Oct 14 05:15:40 localhost systemd[1]: Stopped nova_virtproxyd container.
Oct 14 05:15:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53644 DF PROTO=TCP SPT=40368 DPT=9105 SEQ=2274677298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7601F42A0000000001030307)
Oct 14 05:15:40 localhost systemd[1]: var-lib-containers-storage-overlay-fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2-merged.mount: Deactivated successfully.
Oct 14 05:15:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ead2a7bb377ea3bf48947157603e1a4e11433024d5f3f9a770f85fe4442becd-userdata-shm.mount: Deactivated successfully.
Oct 14 05:15:41 localhost python3.9[110350]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:15:42 localhost systemd[1]: Reloading.
Oct 14 05:15:42 localhost systemd-sysv-generator[110381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:15:42 localhost systemd-rc-local-generator[110377]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:15:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:15:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully.
Oct 14 05:15:42 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m.
Oct 14 05:15:42 localhost systemd[1]: Stopping nova_virtqemud container...
Oct 14 05:15:42 localhost systemd[1]: libpod-005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893.scope: Deactivated successfully.
Oct 14 05:15:42 localhost systemd[1]: libpod-005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893.scope: Consumed 2.392s CPU time.
Oct 14 05:15:42 localhost podman[110391]: 2025-10-14 09:15:42.625285815 +0000 UTC m=+0.090687967 container stop 005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, architecture=x86_64, tcib_managed=true, version=17.1.9, container_name=nova_virtqemud, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2)
Oct 14 05:15:42 localhost podman[110391]: 2025-10-14 09:15:42.659947027 +0000 UTC m=+0.125349179 container died 005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t',
'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, 
build-date=2025-07-21T14:56:59, config_id=tripleo_step3, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 14 05:15:42 localhost systemd[1]: tmp-crun.tiEFsU.mount: Deactivated successfully. Oct 14 05:15:42 localhost podman[110391]: 2025-10-14 09:15:42.696873129 +0000 UTC m=+0.162275231 container cleanup 005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_virtqemud, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 14 05:15:42 localhost podman[110391]: nova_virtqemud Oct 14 05:15:42 localhost podman[110405]: 2025-10-14 09:15:42.708474661 +0000 UTC m=+0.069113548 container cleanup 005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, container_name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, managed_by=tripleo_ansible, release=2, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59) Oct 14 05:15:42 localhost systemd[1]: libpod-conmon-005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893.scope: Deactivated successfully. Oct 14 05:15:42 localhost podman[110432]: error opening file `/run/crun/005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893/status`: No such file or directory Oct 14 05:15:42 localhost podman[110420]: 2025-10-14 09:15:42.826771708 +0000 UTC m=+0.080132553 container cleanup 005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, release=2, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.expose-services=, container_name=nova_virtqemud, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc.) 
Oct 14 05:15:42 localhost podman[110420]: nova_virtqemud Oct 14 05:15:42 localhost systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully. Oct 14 05:15:42 localhost systemd[1]: Stopped nova_virtqemud container. Oct 14 05:15:43 localhost systemd[1]: var-lib-containers-storage-overlay-e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd-merged.mount: Deactivated successfully. Oct 14 05:15:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-005cf667a2bec73d2965b2cc200f62ca57f639f0ee2af6aae6f4f28ebad85893-userdata-shm.mount: Deactivated successfully. Oct 14 05:15:43 localhost python3.9[110525]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:15:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26192 DF PROTO=TCP SPT=46420 DPT=9101 SEQ=1145931591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760202F30000000001030307) Oct 14 05:15:44 localhost systemd[1]: Reloading. Oct 14 05:15:44 localhost systemd-rc-local-generator[110551]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:15:44 localhost systemd-sysv-generator[110556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:15:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:15:45 localhost python3.9[110655]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:15:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:15:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. Oct 14 05:15:46 localhost systemd[1]: Reloading. Oct 14 05:15:47 localhost podman[110658]: 2025-10-14 09:15:47.044597353 +0000 UTC m=+0.082730634 container health_status 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, distribution-scope=public, release=1, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:15:47 localhost podman[110658]: 2025-10-14 09:15:47.113065852 +0000 UTC m=+0.151199133 container exec_died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, distribution-scope=public, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, build-date=2025-07-21T13:28:44) Oct 14 05:15:47 localhost systemd-rc-local-generator[110712]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:15:47 localhost systemd-sysv-generator[110716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:15:47 localhost podman[110658]: unhealthy Oct 14 05:15:47 localhost podman[110659]: 2025-10-14 09:15:47.131654572 +0000 UTC m=+0.170241695 container health_status 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible) Oct 14 05:15:47 localhost podman[110659]: 2025-10-14 09:15:47.146965064 +0000 UTC m=+0.185552197 container exec_died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, batch=17.1_20250721.1, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 14 05:15:47 localhost podman[110659]: unhealthy Oct 14 05:15:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:15:47 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:15:47 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed with result 'exit-code'. Oct 14 05:15:47 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:15:47 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed with result 'exit-code'. Oct 14 05:15:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26194 DF PROTO=TCP SPT=46420 DPT=9101 SEQ=1145931591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76020EEA0000000001030307) Oct 14 05:15:47 localhost systemd[1]: Stopping nova_virtsecretd container... Oct 14 05:15:47 localhost systemd[1]: libpod-b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433.scope: Deactivated successfully. 
Oct 14 05:15:47 localhost podman[110735]: 2025-10-14 09:15:47.409778294 +0000 UTC m=+0.085687753 container died b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, version=17.1.9, vcs-type=git, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtsecretd) Oct 14 05:15:47 localhost podman[110735]: 2025-10-14 09:15:47.448950566 +0000 UTC m=+0.124859975 container cleanup b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, version=17.1.9, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtsecretd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, release=2, io.buildah.version=1.33.12, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2) Oct 14 05:15:47 localhost podman[110735]: nova_virtsecretd Oct 14 05:15:47 localhost podman[110749]: 2025-10-14 09:15:47.497766388 +0000 UTC m=+0.070552676 container cleanup b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 05:15:47 localhost systemd[1]: libpod-conmon-b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433.scope: Deactivated successfully. 
Oct 14 05:15:47 localhost podman[110775]: error opening file `/run/crun/b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433/status`: No such file or directory Oct 14 05:15:47 localhost podman[110764]: 2025-10-14 09:15:47.600677593 +0000 UTC m=+0.070502226 container cleanup b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtsecretd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, release=2, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 14 05:15:47 localhost podman[110764]: nova_virtsecretd Oct 14 05:15:47 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Oct 14 05:15:47 localhost systemd[1]: Stopped nova_virtsecretd container. Oct 14 05:15:48 localhost systemd[1]: var-lib-containers-storage-overlay-5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde-merged.mount: Deactivated successfully. Oct 14 05:15:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433-userdata-shm.mount: Deactivated successfully. 
Oct 14 05:15:48 localhost python3.9[110868]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:15:48 localhost systemd[1]: Reloading. Oct 14 05:15:48 localhost systemd-rc-local-generator[110893]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:15:48 localhost systemd-sysv-generator[110899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:15:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:15:48 localhost systemd[1]: Stopping nova_virtstoraged container... Oct 14 05:15:48 localhost systemd[1]: tmp-crun.eQvlyV.mount: Deactivated successfully. Oct 14 05:15:48 localhost systemd[1]: libpod-5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4.scope: Deactivated successfully. 
Oct 14 05:15:48 localhost podman[110908]: 2025-10-14 09:15:48.893974507 +0000 UTC m=+0.082188658 container died 5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=2, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., container_name=nova_virtstoraged, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T14:56:59, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:15:48 localhost podman[110908]: 2025-10-14 09:15:48.932629506 +0000 UTC m=+0.120843627 container cleanup 5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, container_name=nova_virtstoraged, config_data={'cgroupns': 'host', 
'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:59, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, 
name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public) Oct 14 05:15:48 localhost podman[110908]: nova_virtstoraged Oct 14 05:15:48 localhost podman[110923]: 2025-10-14 09:15:48.976194557 +0000 UTC m=+0.070508936 container cleanup 5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtstoraged, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 14 05:15:48 localhost systemd[1]: libpod-conmon-5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4.scope: Deactivated successfully. 
Oct 14 05:15:49 localhost podman[110951]: error opening file `/run/crun/5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4/status`: No such file or directory Oct 14 05:15:49 localhost podman[110940]: 2025-10-14 09:15:49.086080249 +0000 UTC m=+0.074105803 container cleanup 5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f5be0e0347f8a081fe8927c6f95950cc'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T14:56:59, container_name=nova_virtstoraged, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-type=git, release=2, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, distribution-scope=public, version=17.1.9) Oct 14 05:15:49 localhost podman[110940]: nova_virtstoraged Oct 14 05:15:49 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Oct 14 05:15:49 localhost systemd[1]: Stopped nova_virtstoraged container. Oct 14 05:15:49 localhost systemd[1]: var-lib-containers-storage-overlay-eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385-merged.mount: Deactivated successfully. Oct 14 05:15:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4-userdata-shm.mount: Deactivated successfully. 
Oct 14 05:15:49 localhost python3.9[111046]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:15:49 localhost systemd[1]: Reloading. Oct 14 05:15:50 localhost systemd-rc-local-generator[111072]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:15:50 localhost systemd-sysv-generator[111075]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:15:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:15:50 localhost systemd[1]: Stopping ovn_controller container... Oct 14 05:15:50 localhost systemd[1]: libpod-403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.scope: Deactivated successfully. Oct 14 05:15:50 localhost systemd[1]: libpod-403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.scope: Consumed 2.732s CPU time. 
Oct 14 05:15:50 localhost podman[111087]: 2025-10-14 09:15:50.328955169 +0000 UTC m=+0.082489147 container died 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, version=17.1.9) Oct 14 05:15:50 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.timer: 
Deactivated successfully. Oct 14 05:15:50 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17. Oct 14 05:15:50 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed to open /run/systemd/transient/403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: No such file or directory Oct 14 05:15:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17-userdata-shm.mount: Deactivated successfully. Oct 14 05:15:50 localhost systemd[1]: var-lib-containers-storage-overlay-ef2659ef36954d83ebad031f4d14eeae08e60b1f17aa34c32cb449aad821b207-merged.mount: Deactivated successfully. Oct 14 05:15:50 localhost podman[111087]: 2025-10-14 09:15:50.373555557 +0000 UTC m=+0.127089505 container cleanup 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, container_name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4) Oct 14 05:15:50 localhost podman[111087]: ovn_controller Oct 14 05:15:50 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.timer: Failed to open /run/systemd/transient/403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.timer: No such file or directory Oct 14 05:15:50 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed to open /run/systemd/transient/403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: No such file or directory Oct 14 05:15:50 localhost podman[111100]: 2025-10-14 09:15:50.426459488 +0000 UTC m=+0.082807715 container cleanup 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 14 05:15:50 localhost systemd[1]: libpod-conmon-403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.scope: Deactivated successfully. 
Oct 14 05:15:50 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.timer: Failed to open /run/systemd/transient/403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.timer: No such file or directory Oct 14 05:15:50 localhost systemd[1]: 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: Failed to open /run/systemd/transient/403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17.service: No such file or directory Oct 14 05:15:50 localhost podman[111114]: 2025-10-14 09:15:50.529349233 +0000 UTC m=+0.073918937 container cleanup 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 
17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, version=17.1.9) Oct 14 05:15:50 localhost podman[111114]: ovn_controller Oct 14 05:15:50 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Oct 14 05:15:50 localhost systemd[1]: Stopped ovn_controller container. Oct 14 05:15:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26195 DF PROTO=TCP SPT=46420 DPT=9101 SEQ=1145931591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76021EA90000000001030307) Oct 14 05:15:51 localhost python3.9[111219]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:15:51 localhost systemd[1]: Reloading. Oct 14 05:15:51 localhost systemd-sysv-generator[111247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:15:51 localhost systemd-rc-local-generator[111243]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:15:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:15:51 localhost systemd[1]: Stopping ovn_metadata_agent container... 
Oct 14 05:15:51 localhost systemd[1]: tmp-crun.gs6N9h.mount: Deactivated successfully. Oct 14 05:15:52 localhost systemd[1]: libpod-9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.scope: Deactivated successfully. Oct 14 05:15:52 localhost systemd[1]: libpod-9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.scope: Consumed 9.923s CPU time. Oct 14 05:15:52 localhost podman[111260]: 2025-10-14 09:15:52.353589272 +0000 UTC m=+0.555746152 container died 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T16:28:53, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 14 05:15:52 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.timer: Deactivated successfully. Oct 14 05:15:52 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c. 
Oct 14 05:15:52 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed to open /run/systemd/transient/9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: No such file or directory Oct 14 05:15:52 localhost podman[111260]: 2025-10-14 09:15:52.472938649 +0000 UTC m=+0.675095499 container cleanup 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 14 05:15:52 localhost podman[111260]: ovn_metadata_agent Oct 14 05:15:52 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.timer: Failed to open /run/systemd/transient/9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.timer: No such file or directory Oct 14 05:15:52 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed to open /run/systemd/transient/9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: No such file or directory Oct 14 05:15:52 localhost podman[111273]: 2025-10-14 09:15:52.497612831 +0000 UTC m=+0.131502283 container cleanup 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 14 05:15:52 localhost systemd[1]: libpod-conmon-9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.scope: Deactivated successfully. Oct 14 05:15:52 localhost podman[111302]: error opening file `/run/crun/9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c/status`: No such file or directory Oct 14 05:15:52 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.timer: Failed to open /run/systemd/transient/9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.timer: No such file or directory Oct 14 05:15:52 localhost systemd[1]: 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: Failed to open /run/systemd/transient/9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c.service: No such file or directory Oct 14 05:15:52 localhost podman[111289]: 2025-10-14 09:15:52.614306596 +0000 UTC m=+0.079925798 container cleanup 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 14 
05:15:52 localhost podman[111289]: ovn_metadata_agent Oct 14 05:15:52 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully. Oct 14 05:15:52 localhost systemd[1]: Stopped ovn_metadata_agent container. Oct 14 05:15:52 localhost systemd[1]: var-lib-containers-storage-overlay-15a786747d6feeb3f247951c727a866692741e8c0e2a628920395caa23adc45e-merged.mount: Deactivated successfully. Oct 14 05:15:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c-userdata-shm.mount: Deactivated successfully. Oct 14 05:15:53 localhost python3.9[111395]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:15:53 localhost systemd[1]: Reloading. Oct 14 05:15:53 localhost systemd-rc-local-generator[111425]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:15:53 localhost systemd-sysv-generator[111428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:15:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:15:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10689 DF PROTO=TCP SPT=56764 DPT=9102 SEQ=761182363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7602285E0000000001030307) Oct 14 05:15:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10690 DF PROTO=TCP SPT=56764 DPT=9102 SEQ=761182363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76022C6A0000000001030307) Oct 14 05:16:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49822 DF PROTO=TCP SPT=56484 DPT=9100 SEQ=2862918781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760242440000000001030307) Oct 14 05:16:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1082 DF PROTO=TCP SPT=51970 DPT=9882 SEQ=2225198042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760242EA0000000001030307) Oct 14 05:16:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49824 DF PROTO=TCP SPT=56484 DPT=9100 SEQ=2862918781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76024E690000000001030307) Oct 14 05:16:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49825 DF PROTO=TCP SPT=56484 DPT=9100 SEQ=2862918781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A76025E290000000001030307) Oct 14 05:16:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57462 DF PROTO=TCP SPT=57504 DPT=9105 SEQ=2013653960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760269690000000001030307) Oct 14 05:16:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19900 DF PROTO=TCP SPT=60922 DPT=9101 SEQ=3858779999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760278220000000001030307) Oct 14 05:16:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19902 DF PROTO=TCP SPT=60922 DPT=9101 SEQ=3858779999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760284290000000001030307) Oct 14 05:16:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19903 DF PROTO=TCP SPT=60922 DPT=9101 SEQ=3858779999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760293EA0000000001030307) Oct 14 05:16:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20627 DF PROTO=TCP SPT=36886 DPT=9102 SEQ=2005458857 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76029D8E0000000001030307) Oct 14 05:16:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20628 DF PROTO=TCP SPT=36886 DPT=9102 SEQ=2005458857 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A7602A1A90000000001030307) Oct 14 05:16:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45497 DF PROTO=TCP SPT=42432 DPT=9100 SEQ=1442606863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7602B7730000000001030307) Oct 14 05:16:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48383 DF PROTO=TCP SPT=37284 DPT=9882 SEQ=2016680897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7602B81A0000000001030307) Oct 14 05:16:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45499 DF PROTO=TCP SPT=42432 DPT=9100 SEQ=1442606863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7602C3690000000001030307) Oct 14 05:16:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32722 DF PROTO=TCP SPT=33352 DPT=9105 SEQ=865970883 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7602CEEA0000000001030307) Oct 14 05:16:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32723 DF PROTO=TCP SPT=33352 DPT=9105 SEQ=865970883 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7602DEA90000000001030307) Oct 14 05:16:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39936 DF PROTO=TCP SPT=43226 DPT=9101 SEQ=133971578 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7602ED530000000001030307) Oct 14 05:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39938 DF PROTO=TCP SPT=43226 DPT=9101 SEQ=133971578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7602F9690000000001030307) Oct 14 05:16:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39939 DF PROTO=TCP SPT=43226 DPT=9101 SEQ=133971578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760309290000000001030307) Oct 14 05:16:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55413 DF PROTO=TCP SPT=57328 DPT=9102 SEQ=1416006933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760312BE0000000001030307) Oct 14 05:16:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55414 DF PROTO=TCP SPT=57328 DPT=9102 SEQ=1416006933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760316AA0000000001030307) Oct 14 05:17:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13031 DF PROTO=TCP SPT=56594 DPT=9100 SEQ=758155245 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76032CA40000000001030307) Oct 14 05:17:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4306 DF PROTO=TCP SPT=35838 DPT=9882 
SEQ=3522989179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76032D490000000001030307) Oct 14 05:17:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13033 DF PROTO=TCP SPT=56594 DPT=9100 SEQ=758155245 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760338A90000000001030307) Oct 14 05:17:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13034 DF PROTO=TCP SPT=56594 DPT=9100 SEQ=758155245 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760348690000000001030307) Oct 14 05:17:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50721 DF PROTO=TCP SPT=47856 DPT=9105 SEQ=1100372747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760353AA0000000001030307) Oct 14 05:17:13 localhost systemd[1]: session-36.scope: Deactivated successfully. Oct 14 05:17:13 localhost systemd[1]: session-36.scope: Consumed 19.616s CPU time. Oct 14 05:17:13 localhost systemd-logind[760]: Session 36 logged out. Waiting for processes to exit. Oct 14 05:17:13 localhost systemd-logind[760]: Removed session 36. 
Oct 14 05:17:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12323 DF PROTO=TCP SPT=56068 DPT=9101 SEQ=2497948798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760362830000000001030307) Oct 14 05:17:17 localhost podman[111629]: 2025-10-14 09:17:16.99963795 +0000 UTC m=+0.088740765 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 05:17:17 localhost podman[111629]: 2025-10-14 09:17:17.09230729 +0000 UTC m=+0.181409915 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.openshift.expose-services=, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, release=553, io.buildah.version=1.33.12, version=7, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 05:17:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12325 DF PROTO=TCP SPT=56068 DPT=9101 SEQ=2497948798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76036EAA0000000001030307) Oct 14 05:17:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12326 DF PROTO=TCP SPT=56068 DPT=9101 SEQ=2497948798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76037E690000000001030307) Oct 14 05:17:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15838 DF PROTO=TCP SPT=41936 DPT=9102 SEQ=2021661474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760387EE0000000001030307) Oct 14 
05:17:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15839 DF PROTO=TCP SPT=41936 DPT=9102 SEQ=2021661474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76038BE90000000001030307) Oct 14 05:17:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21897 DF PROTO=TCP SPT=45626 DPT=9100 SEQ=2743146509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7603A1D30000000001030307) Oct 14 05:17:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15683 DF PROTO=TCP SPT=55472 DPT=9882 SEQ=3249764560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7603A27B0000000001030307) Oct 14 05:17:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21899 DF PROTO=TCP SPT=45626 DPT=9100 SEQ=2743146509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7603ADE90000000001030307) Oct 14 05:17:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21900 DF PROTO=TCP SPT=45626 DPT=9100 SEQ=2743146509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7603BDAA0000000001030307) Oct 14 05:17:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27440 DF PROTO=TCP SPT=53614 DPT=9105 SEQ=3686825720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A7603C8E90000000001030307)
Oct 14 05:17:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7367 DF PROTO=TCP SPT=55264 DPT=9101 SEQ=2952627544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7603D7B30000000001030307)
Oct 14 05:17:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7369 DF PROTO=TCP SPT=55264 DPT=9101 SEQ=2952627544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7603E3A90000000001030307)
Oct 14 05:17:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7370 DF PROTO=TCP SPT=55264 DPT=9101 SEQ=2952627544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7603F3690000000001030307)
Oct 14 05:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59044 DF PROTO=TCP SPT=52954 DPT=9102 SEQ=2140618851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7603FD1E0000000001030307)
Oct 14 05:17:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59045 DF PROTO=TCP SPT=52954 DPT=9102 SEQ=2140618851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7604012A0000000001030307)
Oct 14 05:18:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42329 DF PROTO=TCP SPT=41232 DPT=9100 SEQ=1694353379 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760417050000000001030307)
Oct 14 05:18:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42241 DF PROTO=TCP SPT=33948 DPT=9882 SEQ=2557388606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760417A90000000001030307)
Oct 14 05:18:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42331 DF PROTO=TCP SPT=41232 DPT=9100 SEQ=1694353379 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760423290000000001030307)
Oct 14 05:18:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42332 DF PROTO=TCP SPT=41232 DPT=9100 SEQ=1694353379 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760432E90000000001030307)
Oct 14 05:18:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2900 DF PROTO=TCP SPT=60984 DPT=9105 SEQ=637887193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76043E2A0000000001030307)
Oct 14 05:18:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37896 DF PROTO=TCP SPT=32836 DPT=9101 SEQ=335099455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76044CE30000000001030307)
Oct 14 05:18:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37898 DF PROTO=TCP SPT=32836 DPT=9101 SEQ=335099455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760458E90000000001030307)
Oct 14 05:18:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37899 DF PROTO=TCP SPT=32836 DPT=9101 SEQ=335099455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760468A90000000001030307)
Oct 14 05:18:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46387 DF PROTO=TCP SPT=37136 DPT=9102 SEQ=622142234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7604724E0000000001030307)
Oct 14 05:18:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46388 DF PROTO=TCP SPT=37136 DPT=9102 SEQ=622142234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760476690000000001030307)
Oct 14 05:18:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12863 DF PROTO=TCP SPT=58864 DPT=9100 SEQ=87522283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76048C330000000001030307)
Oct 14 05:18:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24600 DF PROTO=TCP SPT=33632 DPT=9882 SEQ=3846190781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76048CDA0000000001030307)
Oct 14 05:18:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12865 DF PROTO=TCP SPT=58864 DPT=9100 SEQ=87522283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760498290000000001030307)
Oct 14 05:18:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12866 DF PROTO=TCP SPT=58864 DPT=9100 SEQ=87522283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7604A7E90000000001030307)
Oct 14 05:18:38 localhost sshd[111848]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:18:38 localhost systemd-logind[760]: New session 37 of user zuul.
Oct 14 05:18:38 localhost systemd[1]: Started Session 37 of User zuul.
Oct 14 05:18:39 localhost python3.9[111929]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:39 localhost python3.9[112021]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14490 DF PROTO=TCP SPT=51774 DPT=9105 SEQ=3748903245 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7604B3290000000001030307)
Oct 14 05:18:40 localhost python3.9[112113]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:41 localhost python3.9[112205]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:41 localhost python3.9[112297]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:42 localhost python3.9[112389]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:43 localhost python3.9[112481]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:43 localhost python3.9[112573]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5599 DF PROTO=TCP SPT=39766 DPT=9101 SEQ=619826001 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7604C2130000000001030307)
Oct 14 05:18:44 localhost python3.9[112665]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:45 localhost python3.9[112757]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:45 localhost python3.9[112849]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:46 localhost python3.9[112941]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:46 localhost python3.9[113033]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5601 DF PROTO=TCP SPT=39766 DPT=9101 SEQ=619826001 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7604CE290000000001030307)
Oct 14 05:18:47 localhost python3.9[113125]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:48 localhost python3.9[113217]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:48 localhost python3.9[113309]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:49 localhost python3.9[113401]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:49 localhost python3.9[113493]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:50 localhost python3.9[113585]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:51 localhost python3.9[113677]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5602 DF PROTO=TCP SPT=39766 DPT=9101 SEQ=619826001 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7604DDE90000000001030307)
Oct 14 05:18:51 localhost python3.9[113769]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:53 localhost python3.9[113861]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:53 localhost python3.9[113953]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53260 DF PROTO=TCP SPT=57240 DPT=9102 SEQ=210143069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7604E77E0000000001030307)
Oct 14 05:18:54 localhost python3.9[114045]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53261 DF PROTO=TCP SPT=57240 DPT=9102 SEQ=210143069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7604EB6A0000000001030307)
Oct 14 05:18:54 localhost python3.9[114137]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:55 localhost python3.9[114229]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:56 localhost python3.9[114321]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:56 localhost python3.9[114413]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:57 localhost python3.9[114505]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:58 localhost python3.9[114597]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:58 localhost python3.9[114689]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:18:59 localhost python3.9[114781]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:00 localhost python3.9[114873]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10812 DF PROTO=TCP SPT=49824 DPT=9100 SEQ=191730380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760501640000000001030307)
Oct 14 05:19:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48095 DF PROTO=TCP SPT=54746 DPT=9882 SEQ=1287116706 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760502090000000001030307)
Oct 14 05:19:00 localhost python3.9[114965]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:01 localhost python3.9[115057]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:02 localhost python3.9[115149]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:02 localhost python3.9[115241]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:03 localhost python3.9[115333]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10814 DF PROTO=TCP SPT=49824 DPT=9100 SEQ=191730380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76050D690000000001030307)
Oct 14 05:19:04 localhost python3.9[115425]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:04 localhost python3.9[115517]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:05 localhost python3.9[115609]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:06 localhost python3.9[115701]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:19:07 localhost python3.9[115793]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10815 DF PROTO=TCP SPT=49824 DPT=9100 SEQ=191730380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76051D290000000001030307)
Oct 14 05:19:08 localhost python3.9[115885]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 14 05:19:09 localhost python3.9[115977]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 05:19:09 localhost systemd[1]: Reloading.
Oct 14 05:19:09 localhost systemd-rc-local-generator[116003]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:19:09 localhost systemd-sysv-generator[116007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:19:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:19:10 localhost python3.9[116104]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22597 DF PROTO=TCP SPT=53450 DPT=9105 SEQ=2076153917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605286A0000000001030307)
Oct 14 05:19:10 localhost python3.9[116197]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:12 localhost python3.9[116290]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:13 localhost python3.9[116383]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:13 localhost python3.9[116476]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46382 DF PROTO=TCP SPT=53194 DPT=9101 SEQ=1778749337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760537430000000001030307)
Oct 14 05:19:14 localhost python3.9[116569]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:14 localhost python3.9[116662]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:15 localhost python3.9[116755]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:16 localhost python3.9[116848]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:16 localhost python3.9[116941]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46384 DF PROTO=TCP SPT=53194 DPT=9101 SEQ=1778749337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760543690000000001030307)
Oct 14 05:19:17 localhost python3.9[117034]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:18 localhost python3.9[117127]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:18 localhost python3.9[117220]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:19 localhost python3.9[117313]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:20 localhost python3.9[117406]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:20 localhost python3.9[117499]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:21 localhost python3.9[117637]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46385 DF PROTO=TCP SPT=53194 DPT=9101 SEQ=1778749337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605532A0000000001030307)
Oct 14 05:19:21 localhost python3.9[117747]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:22 localhost python3.9[117855]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:23 localhost python3.9[117948]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41234 DF PROTO=TCP SPT=54664 DPT=9102 SEQ=1329604496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76055CAE0000000001030307)
Oct 14 05:19:23 localhost python3.9[118041]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:19:24 localhost systemd[1]: session-37.scope: Deactivated successfully.
Oct 14 05:19:24 localhost systemd[1]: session-37.scope: Consumed 31.997s CPU time.
Oct 14 05:19:24 localhost systemd-logind[760]: Session 37 logged out. Waiting for processes to exit.
Oct 14 05:19:24 localhost systemd-logind[760]: Removed session 37.
Oct 14 05:19:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41235 DF PROTO=TCP SPT=54664 DPT=9102 SEQ=1329604496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760560AA0000000001030307) Oct 14 05:19:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25371 DF PROTO=TCP SPT=37316 DPT=9100 SEQ=51713555 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760576940000000001030307) Oct 14 05:19:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50976 DF PROTO=TCP SPT=45100 DPT=9882 SEQ=4288038971 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605773A0000000001030307) Oct 14 05:19:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25373 DF PROTO=TCP SPT=37316 DPT=9100 SEQ=51713555 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760582A90000000001030307) Oct 14 05:19:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25374 DF PROTO=TCP SPT=37316 DPT=9100 SEQ=51713555 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760592690000000001030307) Oct 14 05:19:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17497 DF PROTO=TCP SPT=42774 DPT=9105 SEQ=259106135 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A76059DAA0000000001030307) Oct 14 05:19:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59513 DF PROTO=TCP SPT=53690 DPT=9101 SEQ=3915662034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605AC730000000001030307) Oct 14 05:19:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59515 DF PROTO=TCP SPT=53690 DPT=9101 SEQ=3915662034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605B8690000000001030307) Oct 14 05:19:48 localhost sshd[118057]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:19:49 localhost systemd-logind[760]: New session 38 of user zuul. Oct 14 05:19:49 localhost systemd[1]: Started Session 38 of User zuul. Oct 14 05:19:49 localhost python3.9[118150]: ansible-ansible.legacy.ping Invoked with data=pong Oct 14 05:19:51 localhost python3.9[118254]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:19:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59516 DF PROTO=TCP SPT=53690 DPT=9101 SEQ=3915662034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605C8290000000001030307) Oct 14 05:19:52 localhost python3.9[118346]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:19:52 localhost python3.9[118439]: ansible-ansible.builtin.stat Invoked with 
path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:19:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58024 DF PROTO=TCP SPT=36732 DPT=9102 SEQ=685294195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605D1DE0000000001030307) Oct 14 05:19:53 localhost python3.9[118531]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:19:54 localhost python3.9[118623]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:19:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58025 DF PROTO=TCP SPT=36732 DPT=9102 SEQ=685294195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605D5E90000000001030307) Oct 14 05:19:55 localhost python3.9[118696]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760433594.1568823-177-278284949056625/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Oct 14 05:19:56 localhost python3.9[118788]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:19:57 localhost python3.9[118884]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:19:58 localhost python3.9[118974]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:19:58 localhost network[118991]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 05:19:58 localhost network[118992]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:19:58 localhost network[118993]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:19:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:20:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36452 DF PROTO=TCP SPT=58642 DPT=9100 SEQ=1472204630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605EBC40000000001030307) Oct 14 05:20:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59479 DF PROTO=TCP SPT=40018 DPT=9882 SEQ=3162749902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605EC6A0000000001030307) Oct 14 05:20:02 localhost python3.9[119191]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:20:03 localhost python3.9[119281]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:20:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36454 DF PROTO=TCP SPT=58642 DPT=9100 SEQ=1472204630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7605F7E90000000001030307) Oct 14 05:20:04 localhost python3.9[119377]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install 
./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:20:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36455 DF PROTO=TCP SPT=58642 DPT=9100 SEQ=1472204630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760607A90000000001030307) Oct 14 05:20:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17963 DF PROTO=TCP SPT=41198 DPT=9105 SEQ=1106658468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760612E90000000001030307) Oct 14 05:20:13 localhost systemd[1]: Stopping OpenSSH server daemon... Oct 14 05:20:13 localhost systemd[1]: sshd.service: Deactivated successfully. Oct 14 05:20:13 localhost systemd[1]: Stopped OpenSSH server daemon. Oct 14 05:20:13 localhost systemd[1]: Stopped target sshd-keygen.target. Oct 14 05:20:13 localhost systemd[1]: Stopping sshd-keygen.target... 
Oct 14 05:20:13 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 14 05:20:13 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 14 05:20:13 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 14 05:20:13 localhost systemd[1]: Reached target sshd-keygen.target. Oct 14 05:20:13 localhost systemd[1]: Starting OpenSSH server daemon... Oct 14 05:20:13 localhost sshd[119420]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:20:13 localhost systemd[1]: Started OpenSSH server daemon. Oct 14 05:20:13 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 14 05:20:14 localhost systemd[1]: Starting man-db-cache-update.service... Oct 14 05:20:14 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 14 05:20:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55627 DF PROTO=TCP SPT=56230 DPT=9101 SEQ=1554120390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760621A30000000001030307) Oct 14 05:20:14 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 14 05:20:14 localhost systemd[1]: Finished man-db-cache-update.service. Oct 14 05:20:14 localhost systemd[1]: run-rc22c93d2658b4f269d08dcddd5c6b053.service: Deactivated successfully. Oct 14 05:20:14 localhost systemd[1]: run-rab56166c98f341db8c03495e6d361cc0.service: Deactivated successfully. 
Oct 14 05:20:15 localhost systemd[1]: Stopping OpenSSH server daemon... Oct 14 05:20:15 localhost systemd[1]: sshd.service: Deactivated successfully. Oct 14 05:20:15 localhost systemd[1]: Stopped OpenSSH server daemon. Oct 14 05:20:15 localhost systemd[1]: Stopped target sshd-keygen.target. Oct 14 05:20:15 localhost systemd[1]: Stopping sshd-keygen.target... Oct 14 05:20:15 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 14 05:20:15 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 14 05:20:15 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 14 05:20:15 localhost systemd[1]: Reached target sshd-keygen.target. Oct 14 05:20:15 localhost systemd[1]: Starting OpenSSH server daemon... Oct 14 05:20:15 localhost sshd[119599]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:20:15 localhost systemd[1]: Started OpenSSH server daemon. 
Oct 14 05:20:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55629 DF PROTO=TCP SPT=56230 DPT=9101 SEQ=1554120390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76062DA90000000001030307) Oct 14 05:20:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55630 DF PROTO=TCP SPT=56230 DPT=9101 SEQ=1554120390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76063D690000000001030307) Oct 14 05:20:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30883 DF PROTO=TCP SPT=43074 DPT=9102 SEQ=1583094010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7606470F0000000001030307) Oct 14 05:20:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30884 DF PROTO=TCP SPT=43074 DPT=9102 SEQ=1583094010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76064B290000000001030307) Oct 14 05:20:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47268 DF PROTO=TCP SPT=37612 DPT=9100 SEQ=1730710400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760660F30000000001030307) Oct 14 05:20:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35822 DF PROTO=TCP SPT=59820 DPT=9882 SEQ=3435120268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A7606619A0000000001030307) Oct 14 05:20:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47270 DF PROTO=TCP SPT=37612 DPT=9100 SEQ=1730710400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76066CE90000000001030307) Oct 14 05:20:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47271 DF PROTO=TCP SPT=37612 DPT=9100 SEQ=1730710400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76067CA90000000001030307) Oct 14 05:20:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62066 DF PROTO=TCP SPT=59424 DPT=9105 SEQ=3574915168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760687E90000000001030307) Oct 14 05:20:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52326 DF PROTO=TCP SPT=55680 DPT=9101 SEQ=3072650309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760696D30000000001030307) Oct 14 05:20:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52328 DF PROTO=TCP SPT=55680 DPT=9101 SEQ=3072650309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7606A2E90000000001030307) Oct 14 05:20:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52329 DF PROTO=TCP SPT=55680 DPT=9101 SEQ=3072650309 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A7606B2A90000000001030307) Oct 14 05:20:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21419 DF PROTO=TCP SPT=51984 DPT=9102 SEQ=1895926997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7606BC3D0000000001030307) Oct 14 05:20:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21420 DF PROTO=TCP SPT=51984 DPT=9102 SEQ=1895926997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7606C02A0000000001030307) Oct 14 05:21:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25948 DF PROTO=TCP SPT=34716 DPT=9100 SEQ=2566759174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7606D6240000000001030307) Oct 14 05:21:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18282 DF PROTO=TCP SPT=41940 DPT=9882 SEQ=922843173 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7606D6CA0000000001030307) Oct 14 05:21:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25950 DF PROTO=TCP SPT=34716 DPT=9100 SEQ=2566759174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7606E22A0000000001030307) Oct 14 05:21:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25951 DF PROTO=TCP SPT=34716 DPT=9100 SEQ=2566759174 
ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7606F1E90000000001030307) Oct 14 05:21:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17229 DF PROTO=TCP SPT=52010 DPT=9105 SEQ=3277616970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7606FD290000000001030307) Oct 14 05:21:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5614 DF PROTO=TCP SPT=37600 DPT=9101 SEQ=3824874219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76070C030000000001030307) Oct 14 05:21:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5616 DF PROTO=TCP SPT=37600 DPT=9101 SEQ=3824874219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760718290000000001030307) Oct 14 05:21:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5617 DF PROTO=TCP SPT=37600 DPT=9101 SEQ=3824874219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760727E90000000001030307) Oct 14 05:21:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9858 DF PROTO=TCP SPT=46468 DPT=9102 SEQ=2347842854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7607316E0000000001030307) Oct 14 05:21:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9859 DF PROTO=TCP SPT=46468 DPT=9102 
SEQ=2347842854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760735690000000001030307) Oct 14 05:21:26 localhost kernel: SELinux: Converting 2741 SID table entries... Oct 14 05:21:26 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 14 05:21:26 localhost kernel: SELinux: policy capability open_perms=1 Oct 14 05:21:26 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 14 05:21:26 localhost kernel: SELinux: policy capability always_check_network=0 Oct 14 05:21:26 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 14 05:21:26 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 14 05:21:26 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 14 05:21:28 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=17 res=1 Oct 14 05:21:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45036 DF PROTO=TCP SPT=41658 DPT=9100 SEQ=3577265501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76074B540000000001030307) Oct 14 05:21:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31109 DF PROTO=TCP SPT=50058 DPT=9882 SEQ=2887571853 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76074BF90000000001030307) Oct 14 05:21:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45038 DF PROTO=TCP SPT=41658 DPT=9100 SEQ=3577265501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760757690000000001030307) Oct 14 05:21:33 localhost python3.9[120372]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d 
state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:21:34 localhost python3.9[120464]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:21:34 localhost python3.9[120537]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760433693.7238207-399-254778959996502/.source.fact _original_basename=.1ehjn917 follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:21:35 localhost python3.9[120627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:21:36 localhost python3.9[120725]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:21:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45039 DF PROTO=TCP SPT=41658 DPT=9100 SEQ=3577265501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760767290000000001030307) Oct 14 05:21:37 localhost python3.9[120779]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 
'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:21:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45690 DF PROTO=TCP SPT=39424 DPT=9105 SEQ=3440570455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760772690000000001030307) Oct 14 05:21:41 localhost systemd[1]: Reloading. Oct 14 05:21:41 localhost systemd-rc-local-generator[120813]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:21:41 localhost systemd-sysv-generator[120818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:21:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:21:41 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 14 05:21:43 localhost python3.9[120919]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:21:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3618 DF PROTO=TCP SPT=39364 DPT=9101 SEQ=3370908273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760781350000000001030307) Oct 14 05:21:45 localhost python3.9[121158]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False Oct 14 05:21:46 localhost python3.9[121250]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None Oct 14 05:21:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3620 DF PROTO=TCP SPT=39364 DPT=9101 SEQ=3370908273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76078D2A0000000001030307) Oct 14 05:21:47 localhost python3.9[121343]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:21:48 localhost python3.9[121435]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None Oct 14 05:21:49 localhost python3.9[121527]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:21:50 localhost python3.9[121619]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:21:51 localhost python3.9[121692]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760433710.1483161-723-21655246359489/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2c0c9af0a7c9617e778807fbf142c88d84b85267 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:21:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3621 DF PROTO=TCP SPT=39364 DPT=9101 SEQ=3370908273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76079CE90000000001030307) Oct 14 05:21:52 
localhost python3.9[121784]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None Oct 14 05:21:53 localhost python3.9[121877]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None Oct 14 05:21:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57798 DF PROTO=TCP SPT=33710 DPT=9102 SEQ=1912812784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7607A69E0000000001030307) Oct 14 05:21:54 localhost python3.9[121970]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 14 05:21:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57799 DF PROTO=TCP SPT=33710 DPT=9102 SEQ=1912812784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7607AAA90000000001030307) Oct 14 05:21:55 localhost python3.9[122068]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None Oct 14 05:21:56 localhost python3.9[122160]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ 
install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:22:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59642 DF PROTO=TCP SPT=48634 DPT=9100 SEQ=2514516798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7607C0830000000001030307) Oct 14 05:22:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45253 DF PROTO=TCP SPT=45838 DPT=9882 SEQ=838602350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7607C12A0000000001030307) Oct 14 05:22:00 localhost python3.9[122254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:22:01 localhost python3.9[122347]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:22:01 localhost python3.9[122420]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760433720.836877-966-269745723677442/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 
backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 14 05:22:03 localhost python3.9[122512]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:22:03 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 14 05:22:03 localhost systemd[1]: Stopped Load Kernel Modules. Oct 14 05:22:03 localhost systemd[1]: Stopping Load Kernel Modules... Oct 14 05:22:03 localhost systemd[1]: Starting Load Kernel Modules... Oct 14 05:22:03 localhost systemd-modules-load[122516]: Module 'msr' is built in Oct 14 05:22:03 localhost systemd[1]: Finished Load Kernel Modules. Oct 14 05:22:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59644 DF PROTO=TCP SPT=48634 DPT=9100 SEQ=2514516798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7607CCA90000000001030307) Oct 14 05:22:03 localhost python3.9[122608]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:22:04 localhost python3.9[122681]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760433723.3553202-1035-227451654426321/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None 
attributes=None Oct 14 05:22:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59645 DF PROTO=TCP SPT=48634 DPT=9100 SEQ=2514516798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7607DC690000000001030307) Oct 14 05:22:08 localhost python3.9[122773]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:22:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20 DF PROTO=TCP SPT=38332 DPT=9105 SEQ=3309481390 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7607E7A90000000001030307) Oct 14 05:22:13 localhost python3.9[122865]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:22:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55081 DF PROTO=TCP SPT=33176 DPT=9101 SEQ=3131321060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7607F6620000000001030307) Oct 14 05:22:16 localhost python3.9[122957]: ansible-ansible.builtin.slurp Invoked with 
src=/etc/tuned/active_profile Oct 14 05:22:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55083 DF PROTO=TCP SPT=33176 DPT=9101 SEQ=3131321060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760802690000000001030307) Oct 14 05:22:17 localhost python3.9[123047]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:22:18 localhost python3.9[123139]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:22:19 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Oct 14 05:22:19 localhost systemd[1]: tuned.service: Deactivated successfully. Oct 14 05:22:19 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Oct 14 05:22:19 localhost systemd[1]: tuned.service: Consumed 2.030s CPU time, no IO. Oct 14 05:22:19 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Oct 14 05:22:20 localhost systemd[1]: Started Dynamic System Tuning Daemon. 
Oct 14 05:22:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55084 DF PROTO=TCP SPT=33176 DPT=9101 SEQ=3131321060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760812290000000001030307) Oct 14 05:22:21 localhost python3.9[123241]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Oct 14 05:22:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3921 DF PROTO=TCP SPT=49296 DPT=9102 SEQ=3340087408 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76081BCE0000000001030307) Oct 14 05:22:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3922 DF PROTO=TCP SPT=49296 DPT=9102 SEQ=3340087408 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76081FE90000000001030307) Oct 14 05:22:25 localhost python3.9[123333]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:22:25 localhost systemd[1]: Reloading. Oct 14 05:22:25 localhost systemd-sysv-generator[123365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:22:25 localhost systemd-rc-local-generator[123362]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:22:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:22:26 localhost python3.9[123463]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:22:26 localhost systemd[1]: Reloading. Oct 14 05:22:26 localhost systemd-rc-local-generator[123492]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:22:26 localhost systemd-sysv-generator[123496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:22:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:22:28 localhost python3.9[123593]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:22:28 localhost python3.9[123716]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:22:28 localhost kernel: Adding 1048572k swap on /swap. 
Priority:-2 extents:1 across:1048572k FS Oct 14 05:22:29 localhost python3.9[123841]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:22:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27698 DF PROTO=TCP SPT=59824 DPT=9100 SEQ=2356540863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760835B30000000001030307) Oct 14 05:22:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7725 DF PROTO=TCP SPT=52470 DPT=9882 SEQ=916142593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608365A0000000001030307) Oct 14 05:22:31 localhost python3.9[123990]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:22:32 localhost python3.9[124083]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:22:32 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 14 05:22:32 localhost systemd[1]: Stopped Apply Kernel Variables. Oct 14 05:22:32 localhost systemd[1]: Stopping Apply Kernel Variables... Oct 14 05:22:32 localhost systemd[1]: Starting Apply Kernel Variables... Oct 14 05:22:32 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Oct 14 05:22:32 localhost systemd[1]: Finished Apply Kernel Variables. Oct 14 05:22:32 localhost systemd-logind[760]: Session 38 logged out. Waiting for processes to exit. Oct 14 05:22:32 localhost systemd[1]: session-38.scope: Deactivated successfully. Oct 14 05:22:32 localhost systemd[1]: session-38.scope: Consumed 1min 57.897s CPU time. Oct 14 05:22:32 localhost systemd-logind[760]: Removed session 38. Oct 14 05:22:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27700 DF PROTO=TCP SPT=59824 DPT=9100 SEQ=2356540863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760841A90000000001030307) Oct 14 05:22:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27701 DF PROTO=TCP SPT=59824 DPT=9100 SEQ=2356540863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608516A0000000001030307) Oct 14 05:22:38 localhost sshd[124118]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:22:39 localhost systemd-logind[760]: New session 39 of user zuul. Oct 14 05:22:39 localhost systemd[1]: Started Session 39 of User zuul. 
Oct 14 05:22:40 localhost python3.9[124211]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:22:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23237 DF PROTO=TCP SPT=35938 DPT=9105 SEQ=1847129718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76085CA90000000001030307) Oct 14 05:22:41 localhost python3.9[124305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:22:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:22:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5658 writes, 25K keys, 5658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5658 writes, 708 syncs, 7.99 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:22:42 localhost python3.9[124401]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:22:43 localhost python3.9[124492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] 
fact_path=/etc/ansible/facts.d Oct 14 05:22:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32706 DF PROTO=TCP SPT=56110 DPT=9101 SEQ=1339381018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76086B930000000001030307) Oct 14 05:22:44 localhost python3.9[124588]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:22:45 localhost python3.9[124643]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:22:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:22:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 4839 writes, 21K keys, 4839 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4839 writes, 659 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:22:47 localhost kernel: DROPPING: IN=br-ex 
OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32708 DF PROTO=TCP SPT=56110 DPT=9101 SEQ=1339381018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760877A90000000001030307) Oct 14 05:22:50 localhost python3.9[124737]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:22:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32709 DF PROTO=TCP SPT=56110 DPT=9101 SEQ=1339381018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760887690000000001030307) Oct 14 05:22:52 localhost python3.9[124884]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:22:53 localhost python3.9[124976]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:22:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63634 DF PROTO=TCP SPT=54062 DPT=9102 SEQ=103314210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760890FD0000000001030307) Oct 14 05:22:54 localhost python3.9[125081]: ansible-ansible.legacy.stat Invoked 
with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:22:54 localhost python3.9[125129]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:22:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63635 DF PROTO=TCP SPT=54062 DPT=9102 SEQ=103314210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760894E90000000001030307) Oct 14 05:22:55 localhost python3.9[125221]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:22:56 localhost python3.9[125294]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760433774.864567-323-167066695155482/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 14 05:22:57 localhost python3.9[125386]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit 
owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 14 05:22:57 localhost python3.9[125478]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 14 05:22:58 localhost python3.9[125570]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 14 05:22:59 localhost python3.9[125662]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 14 05:22:59 localhost python3.9[125752]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] 
fact_path=/etc/ansible/facts.d Oct 14 05:23:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16535 DF PROTO=TCP SPT=44808 DPT=9100 SEQ=2072586010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608AAE30000000001030307) Oct 14 05:23:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50287 DF PROTO=TCP SPT=43804 DPT=9882 SEQ=4073966585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608AB890000000001030307) Oct 14 05:23:00 localhost python3.9[125846]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 14 05:23:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16537 DF PROTO=TCP SPT=44808 DPT=9100 SEQ=2072586010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608B6E90000000001030307) Oct 14 05:23:04 localhost python3.9[125940]: ansible-ansible.legacy.dnf Invoked with 
download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 14 05:23:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16538 DF PROTO=TCP SPT=44808 DPT=9100 SEQ=2072586010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608C6A90000000001030307) Oct 14 05:23:09 localhost python3.9[126034]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 14 05:23:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8712 DF PROTO=TCP SPT=46752 DPT=9105 SEQ=2634234176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608D1E90000000001030307) Oct 14 05:23:13 localhost python3.9[126134]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 
'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 14 05:23:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8048 DF PROTO=TCP SPT=40856 DPT=9101 SEQ=1957687046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608E0C30000000001030307) Oct 14 05:23:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8050 DF PROTO=TCP SPT=40856 DPT=9101 SEQ=1957687046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608ECEA0000000001030307) Oct 14 05:23:17 localhost python3.9[126228]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 14 05:23:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=8051 DF PROTO=TCP SPT=40856 DPT=9101 SEQ=1957687046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7608FCA90000000001030307) Oct 14 05:23:22 localhost python3.9[126322]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 14 05:23:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26431 DF PROTO=TCP SPT=59268 DPT=9102 SEQ=393316827 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7609062E0000000001030307) Oct 14 05:23:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26432 DF PROTO=TCP SPT=59268 DPT=9102 SEQ=393316827 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76090A2A0000000001030307) Oct 14 05:23:26 localhost python3.9[126416]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 14 05:23:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61513 DF PROTO=TCP SPT=60344 DPT=9100 SEQ=115806418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760920130000000001030307) Oct 14 05:23:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43331 DF PROTO=TCP SPT=41542 DPT=9882 SEQ=1602175340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760920B90000000001030307) Oct 14 05:23:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61515 DF PROTO=TCP SPT=60344 DPT=9100 SEQ=115806418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76092C2A0000000001030307) Oct 14 05:23:35 localhost podman[126578]: Oct 14 05:23:35 localhost podman[126578]: 2025-10-14 09:23:35.300434681 +0000 UTC m=+0.073977994 container create d6bd2ccb54a25e28a625c7de09eec6e96d8a4b8e35013e58296f0f3a758e70a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_chatelet, name=rhceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red 
Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64) Oct 14 05:23:35 localhost systemd[1]: Started libpod-conmon-d6bd2ccb54a25e28a625c7de09eec6e96d8a4b8e35013e58296f0f3a758e70a4.scope. Oct 14 05:23:35 localhost systemd[1]: Started libcrun container. Oct 14 05:23:35 localhost podman[126578]: 2025-10-14 09:23:35.264073171 +0000 UTC m=+0.037616884 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 05:23:35 localhost podman[126578]: 2025-10-14 09:23:35.364885899 +0000 UTC m=+0.138429222 container init d6bd2ccb54a25e28a625c7de09eec6e96d8a4b8e35013e58296f0f3a758e70a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_chatelet, version=7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, 
io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, name=rhceph) Oct 14 05:23:35 localhost systemd[1]: tmp-crun.cFl7gx.mount: Deactivated successfully. Oct 14 05:23:35 localhost podman[126578]: 2025-10-14 09:23:35.377164696 +0000 UTC m=+0.150708039 container start d6bd2ccb54a25e28a625c7de09eec6e96d8a4b8e35013e58296f0f3a758e70a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_chatelet, release=553, io.buildah.version=1.33.12, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7) Oct 14 05:23:35 localhost podman[126578]: 2025-10-14 09:23:35.377398272 +0000 UTC m=+0.150941665 container attach d6bd2ccb54a25e28a625c7de09eec6e96d8a4b8e35013e58296f0f3a758e70a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_chatelet, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph 
ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=) Oct 14 05:23:35 localhost focused_chatelet[126599]: 167 167 Oct 14 05:23:35 localhost systemd[1]: libpod-d6bd2ccb54a25e28a625c7de09eec6e96d8a4b8e35013e58296f0f3a758e70a4.scope: Deactivated successfully. 
Oct 14 05:23:35 localhost podman[126578]: 2025-10-14 09:23:35.381383798 +0000 UTC m=+0.154927151 container died d6bd2ccb54a25e28a625c7de09eec6e96d8a4b8e35013e58296f0f3a758e70a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_chatelet, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 05:23:35 localhost podman[126608]: 2025-10-14 09:23:35.464219296 +0000 UTC m=+0.071888988 container remove d6bd2ccb54a25e28a625c7de09eec6e96d8a4b8e35013e58296f0f3a758e70a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_chatelet, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_BRANCH=main, release=553, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vendor=Red Hat, Inc.) Oct 14 05:23:35 localhost systemd[1]: libpod-conmon-d6bd2ccb54a25e28a625c7de09eec6e96d8a4b8e35013e58296f0f3a758e70a4.scope: Deactivated successfully. Oct 14 05:23:35 localhost podman[126637]: Oct 14 05:23:35 localhost podman[126637]: 2025-10-14 09:23:35.648215551 +0000 UTC m=+0.072577176 container create 67c9c8dc36d986e01ec808e39e57dae3a62880013b5f3e03e6cb9cfedd5191b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hamilton, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
architecture=x86_64) Oct 14 05:23:35 localhost systemd[1]: Started libpod-conmon-67c9c8dc36d986e01ec808e39e57dae3a62880013b5f3e03e6cb9cfedd5191b1.scope. Oct 14 05:23:35 localhost systemd[1]: Started libcrun container. Oct 14 05:23:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/239c0e36202b2336068ac574f7622603baeeafc52344d359eef755a78e9a2c98/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 14 05:23:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/239c0e36202b2336068ac574f7622603baeeafc52344d359eef755a78e9a2c98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 05:23:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/239c0e36202b2336068ac574f7622603baeeafc52344d359eef755a78e9a2c98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 05:23:35 localhost podman[126637]: 2025-10-14 09:23:35.704142422 +0000 UTC m=+0.128504047 container init 67c9c8dc36d986e01ec808e39e57dae3a62880013b5f3e03e6cb9cfedd5191b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hamilton, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, 
ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.33.12, release=553, GIT_BRANCH=main) Oct 14 05:23:35 localhost podman[126637]: 2025-10-14 09:23:35.713528352 +0000 UTC m=+0.137889977 container start 67c9c8dc36d986e01ec808e39e57dae3a62880013b5f3e03e6cb9cfedd5191b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hamilton, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_BRANCH=main, distribution-scope=public, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, name=rhceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12) Oct 14 05:23:35 localhost podman[126637]: 2025-10-14 09:23:35.71384733 +0000 UTC m=+0.138209005 container attach 67c9c8dc36d986e01ec808e39e57dae3a62880013b5f3e03e6cb9cfedd5191b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hamilton, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_BRANCH=main) Oct 14 05:23:35 localhost podman[126637]: 2025-10-14 09:23:35.62004493 +0000 UTC m=+0.044406585 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 05:23:36 localhost systemd[1]: var-lib-containers-storage-overlay-2c5f6817687efdfddf5d0204c3589acb15d96ba07a1c0d0ec56c9b59df030e78-merged.mount: Deactivated successfully. 
Oct 14 05:23:36 localhost goofy_hamilton[126653]: [ Oct 14 05:23:36 localhost goofy_hamilton[126653]: { Oct 14 05:23:36 localhost goofy_hamilton[126653]: "available": false, Oct 14 05:23:36 localhost goofy_hamilton[126653]: "ceph_device": false, Oct 14 05:23:36 localhost goofy_hamilton[126653]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "lsm_data": {}, Oct 14 05:23:36 localhost goofy_hamilton[126653]: "lvs": [], Oct 14 05:23:36 localhost goofy_hamilton[126653]: "path": "/dev/sr0", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "rejected_reasons": [ Oct 14 05:23:36 localhost goofy_hamilton[126653]: "Insufficient space (<5GB)", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "Has a FileSystem" Oct 14 05:23:36 localhost goofy_hamilton[126653]: ], Oct 14 05:23:36 localhost goofy_hamilton[126653]: "sys_api": { Oct 14 05:23:36 localhost goofy_hamilton[126653]: "actuators": null, Oct 14 05:23:36 localhost goofy_hamilton[126653]: "device_nodes": "sr0", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "human_readable_size": "482.00 KB", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "id_bus": "ata", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "model": "QEMU DVD-ROM", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "nr_requests": "2", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "partitions": {}, Oct 14 05:23:36 localhost goofy_hamilton[126653]: "path": "/dev/sr0", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "removable": "1", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "rev": "2.5+", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "ro": "0", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "rotational": "1", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "sas_address": "", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "sas_device_handle": "", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "scheduler_mode": "mq-deadline", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "sectors": 0, 
Oct 14 05:23:36 localhost goofy_hamilton[126653]: "sectorsize": "2048", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "size": 493568.0, Oct 14 05:23:36 localhost goofy_hamilton[126653]: "support_discard": "0", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "type": "disk", Oct 14 05:23:36 localhost goofy_hamilton[126653]: "vendor": "QEMU" Oct 14 05:23:36 localhost goofy_hamilton[126653]: } Oct 14 05:23:36 localhost goofy_hamilton[126653]: } Oct 14 05:23:36 localhost goofy_hamilton[126653]: ] Oct 14 05:23:36 localhost systemd[1]: libpod-67c9c8dc36d986e01ec808e39e57dae3a62880013b5f3e03e6cb9cfedd5191b1.scope: Deactivated successfully. Oct 14 05:23:36 localhost podman[126637]: 2025-10-14 09:23:36.609967378 +0000 UTC m=+1.034329013 container died 67c9c8dc36d986e01ec808e39e57dae3a62880013b5f3e03e6cb9cfedd5191b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hamilton, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, release=553, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph) Oct 14 05:23:36 localhost systemd[1]: tmp-crun.0AYK9Y.mount: Deactivated 
successfully. Oct 14 05:23:36 localhost systemd[1]: var-lib-containers-storage-overlay-239c0e36202b2336068ac574f7622603baeeafc52344d359eef755a78e9a2c98-merged.mount: Deactivated successfully. Oct 14 05:23:36 localhost podman[128208]: 2025-10-14 09:23:36.698047966 +0000 UTC m=+0.077840266 container remove 67c9c8dc36d986e01ec808e39e57dae3a62880013b5f3e03e6cb9cfedd5191b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_hamilton, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, RELEASE=main, release=553, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, architecture=x86_64, build-date=2025-09-24T08:57:55, ceph=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph) Oct 14 05:23:36 localhost systemd[1]: libpod-conmon-67c9c8dc36d986e01ec808e39e57dae3a62880013b5f3e03e6cb9cfedd5191b1.scope: Deactivated successfully. 
Oct 14 05:23:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61516 DF PROTO=TCP SPT=60344 DPT=9100 SEQ=115806418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76093BE90000000001030307) Oct 14 05:23:37 localhost python3.9[128315]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:23:38 localhost python3.9[128435]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:23:38 localhost python3.9[128508]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760433817.8537338-719-67467042483422/.source.json _original_basename=.3kzowoae follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:23:40 localhost python3.9[128600]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': 
None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 14 05:23:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3060 DF PROTO=TCP SPT=56386 DPT=9105 SEQ=806652765 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760947290000000001030307) Oct 14 05:23:40 localhost systemd-journald[47332]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 77.2 (257 of 333 items), suggesting rotation. Oct 14 05:23:40 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 14 05:23:40 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:23:40 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:23:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43017 DF PROTO=TCP SPT=50200 DPT=9101 SEQ=3844289002 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760955F30000000001030307) Oct 14 05:23:46 localhost podman[128613]: 2025-10-14 09:23:40.208429542 +0000 UTC m=+0.044465346 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Oct 14 05:23:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43019 DF PROTO=TCP SPT=50200 DPT=9101 SEQ=3844289002 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760961E90000000001030307) Oct 14 05:23:47 localhost python3.9[128813]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 14 05:23:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43020 DF PROTO=TCP SPT=50200 DPT=9101 
SEQ=3844289002 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760971A90000000001030307)
Oct 14 05:23:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43981 DF PROTO=TCP SPT=45714 DPT=9102 SEQ=1525901998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76097B5E0000000001030307)
Oct 14 05:23:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43982 DF PROTO=TCP SPT=45714 DPT=9102 SEQ=1525901998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76097F6A0000000001030307)
Oct 14 05:23:55 localhost podman[128825]: 2025-10-14 09:23:47.480255384 +0000 UTC m=+0.038589779 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 14 05:23:56 localhost python3.9[129024]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 14 05:23:58 localhost podman[129036]: 2025-10-14 09:23:56.863537442 +0000 UTC m=+0.044557349 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Oct 14 05:23:59 localhost python3.9[129199]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 14 05:24:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7220 DF PROTO=TCP SPT=36330 DPT=9100 SEQ=342837319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760995450000000001030307)
Oct 14 05:24:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8411 DF PROTO=TCP SPT=38788 DPT=9882 SEQ=3589896357 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760995EA0000000001030307)
Oct 14 05:24:01 localhost podman[129211]: 2025-10-14 09:24:00.037951131 +0000 UTC m=+0.043338315 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 05:24:02 localhost python3.9[129375]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 14 05:24:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7222 DF PROTO=TCP SPT=36330 DPT=9100 SEQ=342837319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7609A1690000000001030307)
Oct 14 05:24:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7223 DF PROTO=TCP SPT=36330 DPT=9100 SEQ=342837319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7609B1290000000001030307)
Oct 14 05:24:07 localhost podman[129387]: 2025-10-14 09:24:02.610443365 +0000 UTC m=+0.041392603 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Oct 14 05:24:08 localhost python3.9[129587]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 14 05:24:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48564 DF PROTO=TCP SPT=59430 DPT=9105 SEQ=2049452176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7609BC690000000001030307)
Oct 14 05:24:10 localhost podman[129601]: 2025-10-14 09:24:08.821360167 +0000 UTC m=+0.041516277 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Oct 14 05:24:11 localhost systemd[1]: session-39.scope: Deactivated successfully.
Oct 14 05:24:11 localhost systemd[1]: session-39.scope: Consumed 1min 35.512s CPU time.
Oct 14 05:24:11 localhost systemd-logind[760]: Session 39 logged out. Waiting for processes to exit.
Oct 14 05:24:11 localhost systemd-logind[760]: Removed session 39.
Oct 14 05:24:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55276 DF PROTO=TCP SPT=42472 DPT=9101 SEQ=503862748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7609CB250000000001030307)
Oct 14 05:24:17 localhost sshd[130016]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:24:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55278 DF PROTO=TCP SPT=42472 DPT=9101 SEQ=503862748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7609D7290000000001030307)
Oct 14 05:24:17 localhost systemd-logind[760]: New session 40 of user zuul.
Oct 14 05:24:17 localhost systemd[1]: Started Session 40 of User zuul.
Oct 14 05:24:20 localhost python3.9[130109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:24:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55279 DF PROTO=TCP SPT=42472 DPT=9101 SEQ=503862748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7609E6E90000000001030307)
Oct 14 05:24:22 localhost python3.9[130205]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 14 05:24:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13847 DF PROTO=TCP SPT=35974 DPT=9102 SEQ=2421297588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7609F08E0000000001030307)
Oct 14 05:24:24 localhost python3.9[130298]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 05:24:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13848 DF PROTO=TCP SPT=35974 DPT=9102 SEQ=2421297588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7609F4A90000000001030307)
Oct 14 05:24:25 localhost python3.9[130352]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 14 05:24:30 localhost python3.9[130446]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 05:24:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2966 DF PROTO=TCP SPT=53500 DPT=9100 SEQ=1846799579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A0A740000000001030307)
Oct 14 05:24:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56214 DF PROTO=TCP SPT=48706 DPT=9882 SEQ=1077706843 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A0B1A0000000001030307)
Oct 14 05:24:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2968 DF PROTO=TCP SPT=53500 DPT=9100 SEQ=1846799579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A16690000000001030307)
Oct 14 05:24:34 localhost python3.9[130540]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 05:24:36 localhost python3.9[130633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:24:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2969 DF PROTO=TCP SPT=53500 DPT=9100 SEQ=1846799579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A26290000000001030307)
Oct 14 05:24:37 localhost python3.9[130725]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 14 05:24:38 localhost kernel: SELinux: Converting 2743 SID table entries...
Oct 14 05:24:38 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 14 05:24:38 localhost kernel: SELinux: policy capability open_perms=1
Oct 14 05:24:38 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 14 05:24:38 localhost kernel: SELinux: policy capability always_check_network=0
Oct 14 05:24:38 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 14 05:24:38 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 14 05:24:38 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 14 05:24:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49901 DF PROTO=TCP SPT=60424 DPT=9105 SEQ=2383132740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A31690000000001030307)
Oct 14 05:24:41 localhost python3.9[130881]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:24:41 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=18 res=1
Oct 14 05:24:42 localhost python3.9[130994]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 05:24:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8432 DF PROTO=TCP SPT=51864 DPT=9101 SEQ=1493102769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A40530000000001030307)
Oct 14 05:24:46 localhost python3.9[131088]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:24:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8434 DF PROTO=TCP SPT=51864 DPT=9101 SEQ=1493102769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A4C690000000001030307)
Oct 14 05:24:48 localhost python3.9[131333]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 14 05:24:49 localhost python3.9[131423]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:24:49 localhost python3.9[131517]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 05:24:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8435 DF PROTO=TCP SPT=51864 DPT=9101 SEQ=1493102769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A5C290000000001030307)
Oct 14 05:24:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14850 DF PROTO=TCP SPT=58104 DPT=9102 SEQ=3352140057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A65BE0000000001030307)
Oct 14 05:24:53 localhost python3.9[131611]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 05:24:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14851 DF PROTO=TCP SPT=58104 DPT=9102 SEQ=3352140057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A69A90000000001030307)
Oct 14 05:24:58 localhost python3.9[131705]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 14 05:24:58 localhost systemd[1]: Reloading.
Oct 14 05:24:58 localhost systemd-rc-local-generator[131731]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:24:58 localhost systemd-sysv-generator[131735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:24:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:24:59 localhost python3.9[131837]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:25:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46986 DF PROTO=TCP SPT=41056 DPT=9100 SEQ=3796679257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A7FA30000000001030307)
Oct 14 05:25:00 localhost python3.9[131929]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4511 DF PROTO=TCP SPT=44816 DPT=9882 SEQ=2259104354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A804A0000000001030307)
Oct 14 05:25:01 localhost python3.9[132023]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:01 localhost python3.9[132115]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:02 localhost python3.9[132207]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:25:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46988 DF PROTO=TCP SPT=41056 DPT=9100 SEQ=3796679257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A8BA90000000001030307)
Oct 14 05:25:03 localhost python3.9[132280]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760433902.301425-563-73355485140771/.source _original_basename=.clzhuvxk follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:04 localhost python3.9[132372]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:05 localhost python3.9[132464]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 14 05:25:05 localhost python3.9[132556]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:06 localhost python3.9[132648]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:25:07 localhost python3.9[132721]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760433906.3230133-689-132027585015414/.source.yaml _original_basename=.j_l7hhcl follow=False checksum=0cadac3cfc033a4e07cfac59b43f6459e787700a force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46989 DF PROTO=TCP SPT=41056 DPT=9100 SEQ=3796679257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760A9B690000000001030307)
Oct 14 05:25:08 localhost python3.9[132813]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 14 05:25:09 localhost ansible-async_wrapper.py[132918]: Invoked with j870175321968 300 /home/zuul/.ansible/tmp/ansible-tmp-1760433908.7404234-761-243959588091737/AnsiballZ_edpm_os_net_config.py _
Oct 14 05:25:09 localhost ansible-async_wrapper.py[132921]: Starting module and watcher
Oct 14 05:25:09 localhost ansible-async_wrapper.py[132921]: Start watching 132922 (300)
Oct 14 05:25:09 localhost ansible-async_wrapper.py[132922]: Start module (132922)
Oct 14 05:25:09 localhost ansible-async_wrapper.py[132918]: Return async_wrapper task started.
Oct 14 05:25:09 localhost python3.9[132923]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Oct 14 05:25:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45275 DF PROTO=TCP SPT=55832 DPT=9105 SEQ=1951791521 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760AA6A90000000001030307)
Oct 14 05:25:10 localhost ansible-async_wrapper.py[132922]: Module complete (132922)
Oct 14 05:25:13 localhost python3.9[133015]: ansible-ansible.legacy.async_status Invoked with jid=j870175321968.132918 mode=status _async_dir=/root/.ansible_async
Oct 14 05:25:13 localhost python3.9[133074]: ansible-ansible.legacy.async_status Invoked with jid=j870175321968.132918 mode=cleanup _async_dir=/root/.ansible_async
Oct 14 05:25:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65357 DF PROTO=TCP SPT=52794 DPT=9101 SEQ=3141294577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760AB5830000000001030307)
Oct 14 05:25:14 localhost python3.9[133166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:25:14 localhost ansible-async_wrapper.py[132921]: Done in kid B.
Oct 14 05:25:15 localhost python3.9[133239]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760433913.994977-827-63934990314288/.source.returncode _original_basename=.tscid_6j follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:15 localhost python3.9[133331]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:25:16 localhost python3.9[133404]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760433915.2301347-875-258023645194807/.source.cfg _original_basename=.bwa0a7pp follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:17 localhost python3.9[133496]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 05:25:17 localhost systemd[1]: Reloading Network Manager...
Oct 14 05:25:17 localhost NetworkManager[5972]: [1760433917.2092] audit: op="reload" arg="0" pid=133500 uid=0 result="success"
Oct 14 05:25:17 localhost NetworkManager[5972]: [1760433917.2101] config: signal: SIGHUP (no changes from disk)
Oct 14 05:25:17 localhost systemd[1]: Reloaded Network Manager.
Oct 14 05:25:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65359 DF PROTO=TCP SPT=52794 DPT=9101 SEQ=3141294577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760AC1A90000000001030307)
Oct 14 05:25:18 localhost systemd-logind[760]: Session 40 logged out. Waiting for processes to exit.
Oct 14 05:25:18 localhost systemd[1]: session-40.scope: Deactivated successfully.
Oct 14 05:25:18 localhost systemd[1]: session-40.scope: Consumed 35.171s CPU time.
Oct 14 05:25:18 localhost systemd-logind[760]: Removed session 40.
Oct 14 05:25:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65360 DF PROTO=TCP SPT=52794 DPT=9101 SEQ=3141294577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760AD1690000000001030307)
Oct 14 05:25:23 localhost sshd[133515]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:25:23 localhost systemd-logind[760]: New session 41 of user zuul.
Oct 14 05:25:23 localhost systemd[1]: Started Session 41 of User zuul.
Oct 14 05:25:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3330 DF PROTO=TCP SPT=54576 DPT=9102 SEQ=4229951591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760ADAEE0000000001030307)
Oct 14 05:25:24 localhost python3.9[133608]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:25:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3331 DF PROTO=TCP SPT=54576 DPT=9102 SEQ=4229951591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760ADEE90000000001030307)
Oct 14 05:25:25 localhost python3.9[133702]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 05:25:26 localhost python3.9[133847]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:25:27 localhost systemd[1]: session-41.scope: Deactivated successfully.
Oct 14 05:25:27 localhost systemd[1]: session-41.scope: Consumed 2.172s CPU time.
Oct 14 05:25:27 localhost systemd-logind[760]: Session 41 logged out. Waiting for processes to exit.
Oct 14 05:25:27 localhost systemd-logind[760]: Removed session 41.
Oct 14 05:25:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54921 DF PROTO=TCP SPT=53714 DPT=9100 SEQ=4212515547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760AF4D50000000001030307)
Oct 14 05:25:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43080 DF PROTO=TCP SPT=40400 DPT=9882 SEQ=4126509808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760AF57A0000000001030307)
Oct 14 05:25:32 localhost sshd[133863]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:25:32 localhost systemd-logind[760]: New session 42 of user zuul.
Oct 14 05:25:32 localhost systemd[1]: Started Session 42 of User zuul.
Oct 14 05:25:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54923 DF PROTO=TCP SPT=53714 DPT=9100 SEQ=4212515547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B00E90000000001030307)
Oct 14 05:25:33 localhost python3.9[133956]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:25:35 localhost python3.9[134050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:25:36 localhost python3.9[134146]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 05:25:37 localhost python3.9[134200]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 05:25:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54924 DF PROTO=TCP SPT=53714 DPT=9100 SEQ=4212515547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B10A90000000001030307)
Oct 14 05:25:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2312 DF PROTO=TCP SPT=55314 DPT=9105 SEQ=1583818019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B1BE90000000001030307)
Oct 14 05:25:41 localhost python3.9[134294]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 05:25:42 localhost python3.9[134507]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:43 localhost python3.9[134646]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:25:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49489 DF PROTO=TCP SPT=49538 DPT=9101 SEQ=2421212862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B2AB40000000001030307)
Oct 14 05:25:44 localhost python3.9[134764]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:25:44 localhost python3.9[134812]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:25:45 localhost python3.9[134904]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:25:45 localhost python3.9[134952]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:25:46 localhost python3.9[135044]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:25:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49491 DF PROTO=TCP SPT=49538 DPT=9101 SEQ=2421212862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B36A90000000001030307)
Oct 14 05:25:47 localhost python3.9[135136]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:25:48 localhost python3.9[135228]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:25:48 localhost python3.9[135320]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf
section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 14 05:25:49 localhost python3.9[135412]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:25:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49492 DF PROTO=TCP SPT=49538 DPT=9101 SEQ=2421212862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B46690000000001030307) Oct 14 05:25:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10308 DF PROTO=TCP SPT=35958 DPT=9102 SEQ=562330370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B501E0000000001030307) Oct 14 05:25:53 localhost python3.9[135506]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:25:54 localhost python3.9[135600]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True 
get_attributes=True checksum_algorithm=sha1 Oct 14 05:25:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10309 DF PROTO=TCP SPT=35958 DPT=9102 SEQ=562330370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B54290000000001030307) Oct 14 05:25:55 localhost python3.9[135692]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:25:56 localhost python3.9[135784]: ansible-service_facts Invoked Oct 14 05:25:56 localhost network[135801]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 05:25:56 localhost network[135802]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:25:56 localhost network[135803]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:25:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:26:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38264 DF PROTO=TCP SPT=36722 DPT=9100 SEQ=3804299468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B6A030000000001030307)
Oct 14 05:26:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27245 DF PROTO=TCP SPT=43356 DPT=9882 SEQ=2307366396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B6AAA0000000001030307)
Oct 14 05:26:02 localhost python3.9[136125]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 05:26:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38266 DF PROTO=TCP SPT=36722 DPT=9100 SEQ=3804299468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B76290000000001030307)
Oct 14 05:26:06 localhost python3.9[136219]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 14 05:26:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38267 DF PROTO=TCP SPT=36722 DPT=9100 SEQ=3804299468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B85E90000000001030307)
Oct 14 05:26:08 localhost python3.9[136311]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:09 localhost python3.9[136386]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760433968.0154185-620-67392235970856/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:09 localhost python3.9[136480]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39409 DF PROTO=TCP SPT=60488 DPT=9105 SEQ=2496040221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B91290000000001030307)
Oct 14 05:26:10 localhost python3.9[136555]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760433969.5172431-665-215813222424797/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:12 localhost python3.9[136649]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:13 localhost python3.9[136743]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 05:26:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50551 DF PROTO=TCP SPT=45666 DPT=9101 SEQ=3351297969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760B9FE20000000001030307)
Oct 14 05:26:15 localhost python3.9[136797]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:26:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50553 DF PROTO=TCP SPT=45666 DPT=9101 SEQ=3351297969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760BABE90000000001030307)
Oct 14 05:26:17 localhost python3.9[136891]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 05:26:18 localhost python3.9[136945]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 05:26:18 localhost chronyd[25893]: chronyd exiting
Oct 14 05:26:18 localhost systemd[1]: Stopping NTP client/server...
Oct 14 05:26:18 localhost systemd[1]: chronyd.service: Deactivated successfully.
Oct 14 05:26:18 localhost systemd[1]: Stopped NTP client/server.
Oct 14 05:26:18 localhost systemd[1]: Starting NTP client/server...
Oct 14 05:26:18 localhost chronyd[136953]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 14 05:26:18 localhost chronyd[136953]: Frequency -26.296 +/- 0.150 ppm read from /var/lib/chrony/drift
Oct 14 05:26:18 localhost chronyd[136953]: Loaded seccomp filter (level 2)
Oct 14 05:26:18 localhost systemd[1]: Started NTP client/server.
Oct 14 05:26:19 localhost systemd[1]: session-42.scope: Deactivated successfully.
Oct 14 05:26:19 localhost systemd[1]: session-42.scope: Consumed 28.047s CPU time.
Oct 14 05:26:19 localhost systemd-logind[760]: Session 42 logged out. Waiting for processes to exit.
Oct 14 05:26:19 localhost systemd-logind[760]: Removed session 42.
Oct 14 05:26:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50554 DF PROTO=TCP SPT=45666 DPT=9101 SEQ=3351297969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760BBBA90000000001030307)
Oct 14 05:26:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21491 DF PROTO=TCP SPT=34862 DPT=9102 SEQ=53316940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760BC54E0000000001030307)
Oct 14 05:26:24 localhost sshd[136969]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:26:24 localhost systemd-logind[760]: New session 43 of user zuul.
Oct 14 05:26:24 localhost systemd[1]: Started Session 43 of User zuul.
Oct 14 05:26:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21492 DF PROTO=TCP SPT=34862 DPT=9102 SEQ=53316940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760BC9690000000001030307)
Oct 14 05:26:25 localhost python3.9[137062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:26:26 localhost python3.9[137158]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:27 localhost python3.9[137263]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:28 localhost python3.9[137311]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.w3f0lswr recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:29 localhost python3.9[137403]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:30 localhost python3.9[137478]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760433988.7851448-143-120861207126542/.source _original_basename=.2mga_ffj follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18408 DF PROTO=TCP SPT=57448 DPT=9100 SEQ=2914436118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760BDF340000000001030307)
Oct 14 05:26:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18499 DF PROTO=TCP SPT=57150 DPT=9882 SEQ=2953022968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760BDFDA0000000001030307)
Oct 14 05:26:30 localhost python3.9[137570]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:26:31 localhost python3.9[137662]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:31 localhost python3.9[137735]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760433990.9242582-215-94026712662399/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:26:32 localhost python3.9[137827]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:33 localhost python3.9[137900]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760433992.1509163-215-95408556321029/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:26:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18410 DF PROTO=TCP SPT=57448 DPT=9100 SEQ=2914436118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760BEB2A0000000001030307)
Oct 14 05:26:34 localhost python3.9[137992]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:34 localhost python3.9[138084]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:35 localhost python3.9[138157]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760433994.19007-326-72672920931764/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:36 localhost python3.9[138249]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:36 localhost python3.9[138322]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760433995.6204257-371-82700229706602/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:36 localhost auditd[726]: Audit daemon rotating log files
Oct 14 05:26:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18411 DF PROTO=TCP SPT=57448 DPT=9100 SEQ=2914436118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760BFAE90000000001030307)
Oct 14 05:26:37 localhost python3.9[138414]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:26:37 localhost systemd[1]: Reloading.
Oct 14 05:26:38 localhost systemd-rc-local-generator[138437]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:26:38 localhost systemd-sysv-generator[138445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:26:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:26:38 localhost systemd[1]: Reloading.
Oct 14 05:26:38 localhost systemd-rc-local-generator[138475]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:26:38 localhost systemd-sysv-generator[138478]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:26:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:26:38 localhost systemd[1]: Starting EDPM Container Shutdown...
Oct 14 05:26:38 localhost systemd[1]: Finished EDPM Container Shutdown.
Oct 14 05:26:40 localhost python3.9[138583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61669 DF PROTO=TCP SPT=33920 DPT=9105 SEQ=2031475224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C06290000000001030307)
Oct 14 05:26:40 localhost python3.9[138656]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760433999.7831635-440-221117921177841/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:41 localhost python3.9[138748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:42 localhost python3.9[138821]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434001.0572002-485-117712777813563/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:42 localhost python3.9[138913]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:26:42 localhost systemd[1]: Reloading.
Oct 14 05:26:43 localhost systemd-rc-local-generator[138940]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:26:43 localhost systemd-sysv-generator[138943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:26:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:26:43 localhost systemd[1]: Starting Create netns directory...
Oct 14 05:26:43 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 05:26:43 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 05:26:43 localhost systemd[1]: Finished Create netns directory.
Oct 14 05:26:44 localhost python3.9[139074]: ansible-ansible.builtin.service_facts Invoked
Oct 14 05:26:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54926 DF PROTO=TCP SPT=46800 DPT=9101 SEQ=4218804879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C15130000000001030307)
Oct 14 05:26:44 localhost network[139105]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 05:26:44 localhost network[139107]: 'network-scripts' will be removed from distribution in near future.
Oct 14 05:26:44 localhost network[139108]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 05:26:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:26:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54928 DF PROTO=TCP SPT=46800 DPT=9101 SEQ=4218804879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C21290000000001030307)
Oct 14 05:26:47 localhost python3.9[139342]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:26:48 localhost python3.9[139417]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434007.3911433-608-158980696988661/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:26:49 localhost python3.9[139508]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 05:26:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54929 DF PROTO=TCP SPT=46800 DPT=9101 SEQ=4218804879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C30EA0000000001030307)
Oct 14 05:26:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55429 DF PROTO=TCP SPT=46872 DPT=9102 SEQ=4072473451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C3A7E0000000001030307)
Oct 14 05:26:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55430 DF PROTO=TCP SPT=46872 DPT=9102 SEQ=4072473451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C3E690000000001030307)
Oct 14 05:27:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61229 DF PROTO=TCP SPT=39494 DPT=9100 SEQ=1011633306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C54640000000001030307)
Oct 14 05:27:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19966 DF PROTO=TCP SPT=46876 DPT=9882 SEQ=1501282080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C550A0000000001030307)
Oct 14 05:27:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61231 DF PROTO=TCP SPT=39494 DPT=9100 SEQ=1011633306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C606A0000000001030307)
Oct 14 05:27:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61232 DF PROTO=TCP SPT=39494 DPT=9100 SEQ=1011633306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C70290000000001030307)
Oct 14 05:27:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37764 DF PROTO=TCP SPT=46284 DPT=9105 SEQ=3988179217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C7B6A0000000001030307)
Oct 14 05:27:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39144 DF PROTO=TCP SPT=53590 DPT=9101 SEQ=1604470792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C8A430000000001030307)
Oct 14 05:27:14 localhost systemd[1]: session-43.scope: Deactivated successfully.
Oct 14 05:27:14 localhost systemd[1]: session-43.scope: Consumed 14.382s CPU time.
Oct 14 05:27:14 localhost systemd-logind[760]: Session 43 logged out. Waiting for processes to exit.
Oct 14 05:27:14 localhost systemd-logind[760]: Removed session 43.
Oct 14 05:27:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39146 DF PROTO=TCP SPT=53590 DPT=9101 SEQ=1604470792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760C96690000000001030307)
Oct 14 05:27:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39147 DF PROTO=TCP SPT=53590 DPT=9101 SEQ=1604470792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760CA6290000000001030307)
Oct 14 05:27:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41095 DF PROTO=TCP SPT=39206 DPT=9102 SEQ=3048340852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760CAFAE0000000001030307)
Oct 14 05:27:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41096 DF PROTO=TCP SPT=39206 DPT=9102 SEQ=3048340852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760CB3A90000000001030307)
Oct 14 05:27:26 localhost sshd[139539]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:27:26 localhost systemd-logind[760]: New session 44 of user zuul.
Oct 14 05:27:26 localhost systemd[1]: Started Session 44 of User zuul.
Oct 14 05:27:27 localhost python3.9[139632]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:27:29 localhost python3.9[139728]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:30 localhost python3.9[139833]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28617 DF PROTO=TCP SPT=60890 DPT=9100 SEQ=318420821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760CC9940000000001030307)
Oct 14 05:27:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23789 DF PROTO=TCP SPT=57764 DPT=9882 SEQ=263857003 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760CCA3A0000000001030307)
Oct 14 05:27:30 localhost python3.9[139881]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.obutjlhg recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:31 localhost python3.9[139973]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:32 localhost python3.9[140021]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.h55hqmaj recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:32 localhost python3.9[140113]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:27:33 localhost python3.9[140205]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28619 DF PROTO=TCP SPT=60890 DPT=9100 SEQ=318420821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760CD5A90000000001030307)
Oct 14 05:27:33 localhost python3.9[140253]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:27:34 localhost python3.9[140345]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:35 localhost python3.9[140393]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:27:35 localhost python3.9[140485]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:36 localhost python3.9[140577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:36 localhost python3.9[140625]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:37 localhost python3.9[140717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28620 DF PROTO=TCP SPT=60890 DPT=9100 SEQ=318420821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760CE56A0000000001030307)
Oct 14 05:27:37 localhost python3.9[140765]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:39 localhost python3.9[140857]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:27:39 localhost systemd[1]: Reloading.
Oct 14 05:27:39 localhost systemd-rc-local-generator[140881]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:27:39 localhost systemd-sysv-generator[140887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:27:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:27:40 localhost python3.9[140987]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58353 DF PROTO=TCP SPT=55706 DPT=9105 SEQ=463570606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760CF0A90000000001030307)
Oct 14 05:27:40 localhost python3.9[141035]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:41 localhost python3.9[141127]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:41 localhost python3.9[141175]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:42 localhost python3.9[141267]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:27:42 localhost systemd[1]: Reloading.
Oct 14 05:27:42 localhost systemd-sysv-generator[141296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:27:42 localhost systemd-rc-local-generator[141293]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:27:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:27:43 localhost systemd[1]: Starting Create netns directory...
Oct 14 05:27:43 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 05:27:43 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 05:27:43 localhost systemd[1]: Finished Create netns directory.
Oct 14 05:27:43 localhost python3.9[141399]: ansible-ansible.builtin.service_facts Invoked
Oct 14 05:27:44 localhost network[141416]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 05:27:44 localhost network[141417]: 'network-scripts' will be removed from distribution in near future.
Oct 14 05:27:44 localhost network[141418]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 05:27:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33405 DF PROTO=TCP SPT=34160 DPT=9101 SEQ=2252730632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760CFF730000000001030307)
Oct 14 05:27:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:27:46 localhost podman[141578]: 2025-10-14 09:27:46.23063295 +0000 UTC m=+0.091592992 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.33.12, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container)
Oct 14 05:27:46 localhost podman[141578]: 2025-10-14 09:27:46.362799192 +0000 UTC m=+0.223759194 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_BRANCH=main, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux )
Oct 14 05:27:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33407 DF PROTO=TCP SPT=34160 DPT=9101 SEQ=2252730632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760D0B690000000001030307)
Oct 14 05:27:51 localhost python3.9[141863]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33408 DF PROTO=TCP SPT=34160 DPT=9101 SEQ=2252730632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760D1B290000000001030307)
Oct 14 05:27:51 localhost python3.9[141911]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:52 localhost python3.9[142003]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:53 localhost python3.9[142095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26916 DF PROTO=TCP SPT=60736 DPT=9102 SEQ=3184933886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760D24DE0000000001030307)
Oct 14 05:27:53 localhost python3.9[142168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434072.4947789-608-32682371200479/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26917 DF PROTO=TCP SPT=60736 DPT=9102 SEQ=3184933886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760D28E90000000001030307)
Oct 14 05:27:54 localhost python3.9[142260]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 14 05:27:54 localhost systemd[1]: Starting Time & Date Service...
Oct 14 05:27:54 localhost systemd[1]: Started Time & Date Service.
Oct 14 05:27:55 localhost python3.9[142356]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:56 localhost python3.9[142448]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:57 localhost python3.9[142521]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434076.0189698-713-11696987809952/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:57 localhost python3.9[142613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:58 localhost python3.9[142686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434077.2659109-758-212911480170603/.source.yaml _original_basename=.d5bechp2 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:27:58 localhost python3.9[142778]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:27:59 localhost python3.9[142853]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434078.4512978-803-268166892606564/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52090 DF PROTO=TCP SPT=54886 DPT=9100 SEQ=2165876851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760D3EC40000000001030307)
Oct 14 05:28:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26573 DF PROTO=TCP SPT=39832 DPT=9882 SEQ=2565165182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760D3F6A0000000001030307)
Oct 14 05:28:00 localhost python3.9[142945]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:28:01 localhost python3.9[143038]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:28:02 localhost python3[143131]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 14 05:28:03 localhost python3.9[143223]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:28:03 localhost python3.9[143296]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434082.5105307-920-222730110425710/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:04 localhost python3.9[143388]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:28:04 localhost python3.9[143461]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434083.7709696-965-243111671415764/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:05 localhost python3.9[143553]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:28:06 localhost python3.9[143626]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434085.0966287-1010-190735981237270/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:06 localhost python3.9[143718]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:28:07 localhost python3.9[143791]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434086.3900006-1055-18613476992042/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:08 localhost python3.9[143883]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:28:08 localhost python3.9[143956]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434087.6662564-1100-17588246996988/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:09 localhost python3.9[144048]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:10 localhost python3.9[144140]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:28:11 localhost python3.9[144235]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:12 localhost python3.9[144328]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:12 localhost python3.9[144420]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:28:13 localhost python3.9[144512]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 14 05:28:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25874 DF PROTO=TCP SPT=34550 DPT=9101 SEQ=36958871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760D74A20000000001030307)
Oct 14 05:28:14 localhost python3.9[144605]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 14 05:28:14 localhost systemd[1]: session-44.scope: Deactivated successfully.
Oct 14 05:28:14 localhost systemd[1]: session-44.scope: Consumed 27.525s CPU time.
Oct 14 05:28:14 localhost systemd-logind[760]: Session 44 logged out. Waiting for processes to exit.
Oct 14 05:28:14 localhost systemd-logind[760]: Removed session 44.
Oct 14 05:28:19 localhost sshd[144621]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:28:20 localhost systemd-logind[760]: New session 45 of user zuul.
Oct 14 05:28:20 localhost systemd[1]: Started Session 45 of User zuul. Oct 14 05:28:20 localhost python3.9[144716]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None Oct 14 05:28:22 localhost python3.9[144808]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:28:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3221 DF PROTO=TCP SPT=48042 DPT=9102 SEQ=1432286829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760D9A0E0000000001030307) Oct 14 05:28:23 localhost python3.9[144902]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts Oct 14 05:28:25 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Oct 14 05:28:25 localhost python3.9[144994]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.llzm3b22 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:28:25 localhost python3.9[145071]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.llzm3b22 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434104.7181828-189-262659735806387/.source.llzm3b22 _original_basename=.99re69ml follow=False checksum=3d1ed25b73f46d4ec79674ca0a766646d7ecfda1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:28:28 localhost python3.9[145163]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:28:28 
localhost chronyd[136953]: Selected source 23.133.168.246 (pool.ntp.org) Oct 14 05:28:29 localhost python3.9[145255]: ansible-ansible.builtin.blockinfile Invoked with block=np0005486733.localdomain,192.168.122.108,np0005486733* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPo0GfacWT5Pc+C+u+omIcLodqLCmBuNDNfCjeb037QgP4jmD3LwkBVK9lXeF6bKJmM0PzOPagPFh4T7FwHNF7Np+V7e+YWSARFeetHnxYmMZdWYyfKTaZrS25xRraxyGrunWniIhAKFUaTz7e6OjUqNe25eVURCgpvQnsWeDwm/Gk9GfpfMCIFRtF7phpUKzSaz/8IpyLG1IzRSMsUkEtoKFxbAkuuJrkD4IWeWvEqn02yWC2WFGEdpQu8kcnxIshwqf9bEa7rYrjDTR++5AuztTSbppQL+8RIclxDR3uCVxzprf9Pj2C0e2X7TVKUs1tlduvrPK7uS10NGx3CK5iUe+uX+4V+jNrpe35OBv2vzdbzR+W6ciNtdy2lWLTou66Fm+/a3XwfJQb66dWQrLIyc6T64D8BysHjA8ER5TZ7N8AZoFZ8tNRzPgNWFZhjzoXdYisTvN9CjcpLgVpzekjeQS4BNNzh7bs+FPdB49TSf65NLzBIhWNqHT8weDoO58=#012np0005486733.localdomain,192.168.122.108,np0005486733* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBuLxBOOZ2E9wHKjXOMebj4OZ7Ol59V1QC+zoNcmtlAO#012np0005486733.localdomain,192.168.122.108,np0005486733* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFER+xRcpNFhmN1n2ALXYX9o2Jz+2SveGOJaTigZLIqTfd5sCQS0J9/MB5gF5Mfkep3gloJeQ8cIc77b5oI9Z0M=#012np0005486728.localdomain,192.168.122.103,np0005486728* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDr2nlXCVxp/8oDgdtx78rfaKpbpZ2BVPZ6HGLZUj0EA3A0bpv/vCkjK3KQT3TI7v1XfpgRbj08G0BbDhcTce9c8drn6X7lMpxvdMYZKKMTHnRs3mq9RsfEuWH3Q8Aa22LiA7rLwzVM2bbdbUcx/55pt3si8ariZ274Pzbprq7RrthEdE9xo5SDFIi+VJNQfQa+igaLblAAoG8qz+WChOAEmghfOAe4F7vBmidVxT92aYUE03zpWtqox4fE1U2dC0FMJ6Jro1ONj8KKCyEL+oLEbWFbPR4ynCyRvGaMIYh+9scB5yCf7vgPXNqu8sG+gR9i5wG43Nnh+76+XX/k+4Vyw/VeNANTjdiGvBcWmj1LLMDetoxZ5AdfklGaQq5qmrIvGqvIAGd7NgdwwWWw2umuIru3mi/5Z0H5I1uhLgTdknibTJSkhkkt/sBiBuyAXM3/HneFzlxDlYgA1xwdZeNnfiH010AO2W8pkWmWsYdMOEOBsM3SmGWtUuGKApwHcs8=#012np0005486728.localdomain,192.168.122.103,np0005486728* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFZgIdoXZ49/AzXU+oHb4E9FVVTK2wJq4yrcPHjFQfqz#012np0005486728.localdomain,192.168.122.103,np0005486728* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMFbWHTiTeGA1XRQ7iWIJKpfCQXOTyNXNwCMjLTErss66DUcnzonE/JU1XrsOoRs149r+P1WVqvqD4ixVbvoNVw=#012np0005486729.localdomain,192.168.122.104,np0005486729* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuTpRqp6mqKsQmynNLG8q8Bb4GSKNLRdYVfi81dV1W3aIPFsswo/C9+5nbZA1YVPY02cdXFps4EmIQl2tQ0sKmdo4HGexnhUJjKuyXFTu0kCYUasXCE5+sSjRVUCF4RfD3+6jQ9w6hHM1R3JkkhPZtKs4ykqH+8Gr2B918BdDuVaujfMmVWMv8M46JDuDO9vGPlWpM+xZkFZ1zjG2I2UIvWLkEnVdta7QIgxIPTlX7rOokadGrkAcIYb87wONg2vJiTPWO4ht4yHUIvTGNHSTmCXK0sdQLiZzjR2P/k67s1KMeWjaWAe3NXygnpvgENx9Qf9NkOYhvz8j+xZXat4Pa/I38V79XAjE3nWEF/KM6a4nKK9Lz5GXOvsQ+LIXBBY6HSAqBY4Lc21xwCJxEoO5Iftn56HzDFA+iyex5FMeT12ANKmVF9D+NHdaiZ3d5iPW6cOPqph1UjWsofejhEt0dxmCbippl74SWTZey9dQ3TKM9BGf2QfH1GvasiC+CsVU=#012np0005486729.localdomain,192.168.122.104,np0005486729* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGxeq2D+w4uY3tKP5yQyoEBem0b2s2hPrJdTzpIuGozW#012np0005486729.localdomain,192.168.122.104,np0005486729* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEXD7o/mHlyE5FPGkrRBHPWn+AwId2YyEBOT/QILn8qgF7Mym76ZEJFAVw5zzuZA1ef4oRAHz26eg7bkU00wtUI=#012np0005486731.localdomain,192.168.122.106,np0005486731* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCirnE0NbUtG1POhhB+AhKCgxEghhJb/WUMq5UfTpoI7+sU48jNxRyEvlJ9WLGLD82QYzFzvYceQHGF3QzqwIybk7JFKNvYYEOkz9hG//Xjh6A/3qZ0QptW0dWlBpSs0CuOATe19vBa98AfD1qNMYOAwwjlRDvjVW17VALcKjVesDK4LNkVfCSX9cK7Gdd1LfEkwQwxiTTZeSd91DSx5XIm3hz9RcMpxpCgc3snA81FXTTb4G1v39rycXuWjjlp/2B4CRlgPrIb6u1X/hkN0uxSMiwMQG7fZladvZi8RTRyt2EmTR0l8f0eDeuN1gLfOFVlQSfj33xH8/2G2s4IUhbudf732i4GKxgy5WBMiH2DVHzoO7LGdKlYKRvxgNG8qx68hOAzHokMnmaHnKlTsXNPph6MD/ufoeHaEG35xMkewSoY70MzDny/Z9lllfTTs+Yi5YEO22s5EoS6KK9C1+WShW9TELIuj5X8P1VeD+LlKJIwbLQzEHLc1irbnJ2RgUc=#012np0005486731.localdomain,192.168.122.106,np0005486731* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKTfHN8Jmqa+7PsF3vFOpO1ETsyPHFXELxpBTIpPfddD#012np0005486731.localdomain,192.168.122.106,np0005486731* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO1jfw99DJUv0J3Z9AkynHW2Up1hO/BlEGnvsE92l9HdbBSEY1YHu6GqkahHkqrmTxGZ5NofIRR2e10OiKQQIV0=#012np0005486732.localdomain,192.168.122.107,np0005486732* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDM+kpIg8Y4xlC9n9pfBoVDeeU3WOfZT4Yf4ib8bb9MSMyOwJpLVbkpe3nLg73heYlLISwD3ojybTo9jDmNS7Pq+q5bGue4oqLk7f5B7IMwrmkfzjKYQpGMLL7FdErlDs6IP2jQ82E+uJ7M54Kv5g0rr+blVacsnYetzjJM26r3UcKTdOjJyIHuvQWa4IzNJRydr8s9//7Orf7269xlmVoqyAkcrhzcewCVeaK7VOrIcy3oKzOtwYpQmSxUumuX5rxE8KoCn4Ag0V3Mpp7hqN2xrry1hJN1J7yXSYaF1pc4MJKvCK6k0VqK4dY6CppsQvx2HW1s/Ib5UxJ/+JypjsqwYcSL7BSesfCtHtY8Tn1bbI+nm+nbMw1VIECq94FvZldDnxbaCQDP7dkFxqJaZebSFX+XAsRqJq4M8/rAm2gFUtCisiggasuEgfBfODBwb5+EYGNBCS/72Xs3b1h+hoMh0XCocdkTpzbr40FK6djLBdZXBAt7/Vwy0fTpC9G8H+s=#012np0005486732.localdomain,192.168.122.107,np0005486732* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP18wQUsgo1UBda3H3zHF+IC2kyNZ51YCgvk01Gn/dse#012np0005486732.localdomain,192.168.122.107,np0005486732* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCuamP8L0Ru51uRKZu8uXCmbi6mkdIaPGzpAzsbiDGTvO4mQOVAysASx3inqIoCaiUKcwRI9OHoaL30bXMeCfgY=#012np0005486730.localdomain,192.168.122.105,np0005486730* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtk5xAqdm3oDp772fF0Tcpwt7lZCIcJjfcDVjKALPT5gaSA/ogGG08ba03OQjSa4fktVIIeYQdRVzWIscOCoWMDa+vnXRStoi9DI+3rLz3nQvH190s8hPq6KxWR8DzGiqF8GwF1Kfuc7wz4c9jdElv6iWUfZuxCSLQfPSRYOw9IIII6knfTuRjQAIdmUJwnjN9K5n2n8rISg0VPd9kUHZR8jL+zFPsv5XkwfW/t5CEMmx6WG8w8Q6gY+yoeU4qINcRzFjKx/s6ParctRSYzJDPYEyhrgqQUesBDU4nyxRDpFilkeZI46TfqC9bG5bKTVfVy6qnAgkt4vg6buwszUTRdx6a0v68zWAwKGNAHRKS/HQ/CRe7CHYqsob7w41V4RvOtP5kz+dniINeT/K71sL3ZwcciRuGM10ayjaxBw7HOMJHi9RWrPWads3ubzTErcORb9mdWdlSomqfEGB8Ig/tKeFTipyN39TKKHLD+o6Tjnxqb3imMsE1kZWQOzHbFhE=#012np0005486730.localdomain,192.168.122.105,np0005486730* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDq3H3Fetnx28JUaDyUkNg0MiLRsl8k1oSo01bE4tTx4#012np0005486730.localdomain,192.168.122.105,np0005486730* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO3n1UwhGEzXCVrYMBza4JMt6lsbT42NITUCGasB/Q88juksY/4w67C7ec1FV7QYfygjevsjTj8uJGh0384TqeQ=#012 create=True mode=0644 path=/tmp/ansible.llzm3b22 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:28:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34061 DF PROTO=TCP SPT=50272 DPT=9100 SEQ=2412989838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760DB3F40000000001030307) Oct 14 05:28:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56467 DF PROTO=TCP SPT=55390 DPT=9882 SEQ=1088774434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760DB49A0000000001030307) Oct 14 05:28:31 localhost python3.9[145347]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.llzm3b22' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:28:32 localhost python3.9[145441]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.llzm3b22 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:28:33 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47901 DF PROTO=TCP SPT=59056 DPT=9105 SEQ=3884254664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760DBF3F0000000001030307) Oct 14 05:28:33 localhost systemd[1]: session-45.scope: Deactivated successfully. Oct 14 05:28:33 localhost systemd[1]: session-45.scope: Consumed 4.237s CPU time. Oct 14 05:28:33 localhost systemd-logind[760]: Session 45 logged out. Waiting for processes to exit. Oct 14 05:28:33 localhost systemd-logind[760]: Removed session 45. Oct 14 05:28:39 localhost sshd[145457]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:28:39 localhost systemd-logind[760]: New session 46 of user zuul. Oct 14 05:28:39 localhost systemd[1]: Started Session 46 of User zuul. Oct 14 05:28:40 localhost python3.9[145550]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:28:41 localhost python3.9[145646]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 14 05:28:43 localhost python3.9[145740]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:28:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52377 DF PROTO=TCP SPT=55574 DPT=9101 SEQ=1989057382 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760DE9D30000000001030307) Oct 14 05:28:44 localhost python3.9[145833]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True 
strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:28:45 localhost python3.9[145926]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:28:46 localhost python3.9[146020]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:28:47 localhost python3.9[146115]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:28:47 localhost systemd[1]: session-46.scope: Deactivated successfully. Oct 14 05:28:47 localhost systemd[1]: session-46.scope: Consumed 3.949s CPU time. Oct 14 05:28:47 localhost systemd-logind[760]: Session 46 logged out. Waiting for processes to exit. Oct 14 05:28:47 localhost systemd-logind[760]: Removed session 46. Oct 14 05:28:52 localhost sshd[146208]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:28:52 localhost systemd-logind[760]: New session 47 of user zuul. Oct 14 05:28:52 localhost systemd[1]: Started Session 47 of User zuul. 
Oct 14 05:28:53 localhost python3.9[146301]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:28:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52975 DF PROTO=TCP SPT=37000 DPT=9102 SEQ=2398266067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E0F3E0000000001030307) Oct 14 05:28:54 localhost python3.9[146397]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:28:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52976 DF PROTO=TCP SPT=37000 DPT=9102 SEQ=2398266067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E132A0000000001030307) Oct 14 05:28:55 localhost python3.9[146451]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 14 05:28:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52977 DF PROTO=TCP SPT=37000 DPT=9102 SEQ=2398266067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A760E1B290000000001030307) Oct 14 05:28:59 localhost python3.9[146543]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:29:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16261 DF PROTO=TCP SPT=58792 DPT=9100 SEQ=3247997395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E29240000000001030307) Oct 14 05:29:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40328 DF PROTO=TCP SPT=44954 DPT=9882 SEQ=456394106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E29CA0000000001030307) Oct 14 05:29:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52978 DF PROTO=TCP SPT=37000 DPT=9102 SEQ=2398266067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E2AE90000000001030307) Oct 14 05:29:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16262 DF PROTO=TCP SPT=58792 DPT=9100 SEQ=3247997395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E2D2A0000000001030307) Oct 14 05:29:01 localhost python3.9[146636]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:29:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40329 DF PROTO=TCP SPT=44954 DPT=9882 SEQ=456394106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E2DE90000000001030307) Oct 14 05:29:02 localhost python3.9[146728]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:29:03 localhost python3.9[146820]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:29:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1749 DF PROTO=TCP SPT=44530 DPT=9105 SEQ=2822666402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A760E346F0000000001030307) Oct 14 05:29:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16263 DF PROTO=TCP SPT=58792 DPT=9100 SEQ=3247997395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E35290000000001030307) Oct 14 05:29:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40330 DF PROTO=TCP SPT=44954 DPT=9882 SEQ=456394106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E35E90000000001030307) Oct 14 05:29:03 localhost python3.9[146910]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 14 05:29:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1750 DF PROTO=TCP SPT=44530 DPT=9105 SEQ=2822666402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E38690000000001030307) Oct 14 05:29:04 localhost python3.9[147000]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:29:05 localhost python3.9[147092]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:29:05 localhost systemd[1]: session-47.scope: Deactivated successfully. 
Oct 14 05:29:05 localhost systemd[1]: session-47.scope: Consumed 9.173s CPU time. Oct 14 05:29:05 localhost systemd-logind[760]: Session 47 logged out. Waiting for processes to exit. Oct 14 05:29:05 localhost systemd-logind[760]: Removed session 47. Oct 14 05:29:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1751 DF PROTO=TCP SPT=44530 DPT=9105 SEQ=2822666402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E406A0000000001030307) Oct 14 05:29:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16264 DF PROTO=TCP SPT=58792 DPT=9100 SEQ=3247997395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E44E90000000001030307) Oct 14 05:29:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40331 DF PROTO=TCP SPT=44954 DPT=9882 SEQ=456394106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E45AA0000000001030307) Oct 14 05:29:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1752 DF PROTO=TCP SPT=44530 DPT=9105 SEQ=2822666402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E50290000000001030307) Oct 14 05:29:11 localhost sshd[147107]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:29:11 localhost systemd-logind[760]: New session 48 of user zuul. Oct 14 05:29:11 localhost systemd[1]: Started Session 48 of User zuul. 
Oct 14 05:29:12 localhost python3.9[147200]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:29:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61984 DF PROTO=TCP SPT=40300 DPT=9101 SEQ=527535537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E5F030000000001030307) Oct 14 05:29:15 localhost python3.9[147296]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:29:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61985 DF PROTO=TCP SPT=40300 DPT=9101 SEQ=527535537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E63290000000001030307) Oct 14 05:29:15 localhost python3.9[147388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:29:16 localhost python3.9[147461]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434155.4152882-185-199076285879940/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2c0c9af0a7c9617e778807fbf142c88d84b85267 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:29:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61986 DF PROTO=TCP SPT=40300 DPT=9101 SEQ=527535537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E6B290000000001030307) Oct 14 05:29:17 localhost python3.9[147553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:29:18 localhost python3.9[147646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:29:18 localhost python3.9[147719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434157.6780982-256-942666532305/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2c0c9af0a7c9617e778807fbf142c88d84b85267 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:29:19 localhost python3.9[147811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t 
state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:29:20 localhost python3.9[147903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:29:20 localhost python3.9[147976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434159.6383655-332-8271028367619/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2c0c9af0a7c9617e778807fbf142c88d84b85267 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:29:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61987 DF PROTO=TCP SPT=40300 DPT=9101 SEQ=527535537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E7AEA0000000001030307) Oct 14 05:29:21 localhost python3.9[148068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:29:22 localhost python3.9[148160]: ansible-ansible.legacy.stat 
Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:29:22 localhost python3.9[148233]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434161.572415-405-232660724925150/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2c0c9af0a7c9617e778807fbf142c88d84b85267 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:29:23 localhost python3.9[148325]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:29:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38903 DF PROTO=TCP SPT=51546 DPT=9102 SEQ=3599541251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E846E0000000001030307)
Oct 14 05:29:24 localhost python3.9[148417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:29:24 localhost python3.9[148490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434163.5323136-480-268605053694738/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2c0c9af0a7c9617e778807fbf142c88d84b85267 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:29:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38904 DF PROTO=TCP SPT=51546 DPT=9102 SEQ=3599541251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E886A0000000001030307)
Oct 14 05:29:25 localhost python3.9[148582]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:29:25 localhost python3.9[148674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:29:26 localhost python3.9[148747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434165.4978304-554-149021002409788/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2c0c9af0a7c9617e778807fbf142c88d84b85267 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:29:27 localhost python3.9[148839]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:29:27 localhost python3.9[148931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:29:28 localhost python3.9[149004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434167.374893-622-38800444689908/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2c0c9af0a7c9617e778807fbf142c88d84b85267 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:29:29 localhost python3.9[149096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:29:30 localhost python3.9[149188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:29:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56864 DF PROTO=TCP SPT=46370 DPT=9100 SEQ=3568874287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E9E540000000001030307)
Oct 14 05:29:30 localhost python3.9[149261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434169.5233362-700-167385616154550/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2c0c9af0a7c9617e778807fbf142c88d84b85267 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:29:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62351 DF PROTO=TCP SPT=41272 DPT=9882 SEQ=4283807601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760E9EF90000000001030307)
Oct 14 05:29:30 localhost systemd[1]: session-48.scope: Deactivated successfully.
Oct 14 05:29:30 localhost systemd[1]: session-48.scope: Consumed 12.180s CPU time.
Oct 14 05:29:30 localhost systemd-logind[760]: Session 48 logged out. Waiting for processes to exit.
Oct 14 05:29:30 localhost systemd-logind[760]: Removed session 48.
Oct 14 05:29:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56866 DF PROTO=TCP SPT=46370 DPT=9100 SEQ=3568874287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760EAA690000000001030307)
Oct 14 05:29:36 localhost sshd[149276]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:29:36 localhost systemd-logind[760]: New session 49 of user zuul.
Oct 14 05:29:36 localhost systemd[1]: Started Session 49 of User zuul.
Oct 14 05:29:37 localhost python3.9[149371]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:29:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56867 DF PROTO=TCP SPT=46370 DPT=9100 SEQ=3568874287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760EBA290000000001030307)
Oct 14 05:29:38 localhost python3.9[149463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:29:38 localhost python3.9[149536]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434177.5784981-62-13174380428882/.source.conf _original_basename=ceph.conf follow=False checksum=3ea08ebaa38e66fdc9487ab3279546d8d5630636 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:29:39 localhost python3.9[149628]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:29:40 localhost python3.9[149701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434179.1273196-62-62913475349950/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=0991400062f1e3522feec6859340320816889889 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:29:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58462 DF PROTO=TCP SPT=35406 DPT=9105 SEQ=1274494461 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760EC56A0000000001030307)
Oct 14 05:29:40 localhost systemd[1]: session-49.scope: Deactivated successfully.
Oct 14 05:29:40 localhost systemd[1]: session-49.scope: Consumed 2.316s CPU time.
Oct 14 05:29:40 localhost systemd-logind[760]: Session 49 logged out. Waiting for processes to exit.
Oct 14 05:29:40 localhost systemd-logind[760]: Removed session 49.
Oct 14 05:29:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12024 DF PROTO=TCP SPT=32784 DPT=9101 SEQ=620737090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760ED4320000000001030307)
Oct 14 05:29:46 localhost sshd[149716]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:29:46 localhost systemd-logind[760]: New session 50 of user zuul.
Oct 14 05:29:46 localhost systemd[1]: Started Session 50 of User zuul.
Oct 14 05:29:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12026 DF PROTO=TCP SPT=32784 DPT=9101 SEQ=620737090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760EE0290000000001030307)
Oct 14 05:29:47 localhost python3.9[149809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:29:48 localhost python3.9[149905]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:29:49 localhost python3.9[149997]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:29:50 localhost python3.9[150117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:29:51 localhost python3.9[150256]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 14 05:29:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12027 DF PROTO=TCP SPT=32784 DPT=9101 SEQ=620737090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760EEFEA0000000001030307)
Oct 14 05:29:52 localhost python3.9[150348]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 05:29:53 localhost python3.9[150402]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 05:29:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46012 DF PROTO=TCP SPT=57172 DPT=9102 SEQ=4123205868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760EF99E0000000001030307)
Oct 14 05:29:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46013 DF PROTO=TCP SPT=57172 DPT=9102 SEQ=4123205868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760EFDA90000000001030307)
Oct 14 05:29:57 localhost python3.9[150496]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 05:29:58 localhost python3[150591]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 14 05:29:59 localhost python3.9[150683]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:00 localhost python3.9[150775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19338 DF PROTO=TCP SPT=32860 DPT=9100 SEQ=1893446883 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F13840000000001030307)
Oct 14 05:30:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52569 DF PROTO=TCP SPT=46948 DPT=9882 SEQ=241888978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F14290000000001030307)
Oct 14 05:30:00 localhost python3.9[150823]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:01 localhost python3.9[150915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:01 localhost python3.9[150963]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7gbq19yz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:02 localhost python3.9[151055]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:03 localhost python3.9[151103]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19340 DF PROTO=TCP SPT=32860 DPT=9100 SEQ=1893446883 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F1FA90000000001030307)
Oct 14 05:30:03 localhost python3.9[151195]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:30:05 localhost python3[151288]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 14 05:30:06 localhost python3.9[151380]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19341 DF PROTO=TCP SPT=32860 DPT=9100 SEQ=1893446883 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F2F690000000001030307)
Oct 14 05:30:07 localhost python3.9[151455]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434205.6160314-431-2028609915921/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:08 localhost python3.9[151547]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:09 localhost python3.9[151622]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434208.1178114-476-104603845331874/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:10 localhost python3.9[151714]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34392 DF PROTO=TCP SPT=47634 DPT=9105 SEQ=1301102981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F3AA90000000001030307)
Oct 14 05:30:10 localhost python3.9[151789]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434209.5967774-521-263015093768320/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:11 localhost python3.9[151881]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:11 localhost python3.9[151956]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434210.9137213-566-276514871219852/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:12 localhost python3.9[152048]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:13 localhost python3.9[152123]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434212.2279365-611-62530366817207/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:14 localhost python3.9[152215]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21153 DF PROTO=TCP SPT=57702 DPT=9101 SEQ=3020046004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F49640000000001030307)
Oct 14 05:30:15 localhost python3.9[152307]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:30:15 localhost python3.9[152402]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:16 localhost python3.9[152494]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:30:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21155 DF PROTO=TCP SPT=57702 DPT=9101 SEQ=3020046004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F55690000000001030307)
Oct 14 05:30:17 localhost python3.9[152587]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:30:18 localhost python3.9[152681]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:30:18 localhost python3.9[152776]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:20 localhost python3.9[152866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:30:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21156 DF PROTO=TCP SPT=57702 DPT=9101 SEQ=3020046004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F65290000000001030307)
Oct 14 05:30:21 localhost python3.9[152959]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005486731.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:2c:0c:de:0a" external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:30:21 localhost ovs-vsctl[152960]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005486731.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:2c:0c:de:0a external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 14 05:30:22 localhost python3.9[153052]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:30:23 localhost python3.9[153145]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:30:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35744 DF PROTO=TCP SPT=56332 DPT=9102 SEQ=1611306320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F6ECE0000000001030307)
Oct 14 05:30:23 localhost python3.9[153239]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:30:24 localhost python3.9[153331]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35745 DF PROTO=TCP SPT=56332 DPT=9102 SEQ=1611306320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F72E90000000001030307)
Oct 14 05:30:25 localhost python3.9[153379]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:30:25 localhost python3.9[153471]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:26 localhost python3.9[153519]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:30:26 localhost python3.9[153611]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:27 localhost python3.9[153703]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:28 localhost python3.9[153751]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:28 localhost python3.9[153843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:29 localhost python3.9[153891]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:30 localhost python3.9[153983]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:30:30 localhost systemd[1]: Reloading.
Oct 14 05:30:30 localhost systemd-sysv-generator[154011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:30:30 localhost systemd-rc-local-generator[154006]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:30:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:30:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21025 DF PROTO=TCP SPT=54370 DPT=9100 SEQ=3331420335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F88B40000000001030307)
Oct 14 05:30:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42636 DF PROTO=TCP SPT=54444 DPT=9882 SEQ=767093906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F89590000000001030307)
Oct 14 05:30:31 localhost python3.9[154113]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:31 localhost python3.9[154161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:32 localhost python3.9[154253]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:32 localhost python3.9[154301]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21027 DF PROTO=TCP SPT=54370 DPT=9100 SEQ=3331420335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760F94AA0000000001030307)
Oct 14 05:30:33 localhost python3.9[154393]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:30:33 localhost systemd[1]: Reloading.
Oct 14 05:30:33 localhost systemd-sysv-generator[154424]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility.
Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:30:33 localhost systemd-rc-local-generator[154419]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:30:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:30:33 localhost systemd[1]: Starting Create netns directory...
Oct 14 05:30:33 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 05:30:33 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 05:30:33 localhost systemd[1]: Finished Create netns directory.
Oct 14 05:30:34 localhost python3.9[154527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:30:35 localhost python3.9[154619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:35 localhost python3.9[154692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434234.9750135-1343-34014528581792/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:30:36 localhost python3.9[154784]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:30:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21028 DF PROTO=TCP SPT=54370 DPT=9100 SEQ=3331420335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760FA4690000000001030307)
Oct 14 05:30:37 localhost python3.9[154876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:30:38 localhost python3.9[154951]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434237.2018664-1418-17358030865589/.source.json _original_basename=.swsx_0pi follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:38 localhost python3.9[155043]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19877 DF PROTO=TCP SPT=48366 DPT=9105 SEQ=1176267595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760FAFAA0000000001030307)
Oct 14 05:30:41 localhost python3.9[155300]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 14 05:30:42 localhost python3.9[155392]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 14 05:30:43 localhost python3.9[155484]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 14 05:30:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27781 DF PROTO=TCP SPT=60500 DPT=9101 SEQ=1925329315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760FBE930000000001030307)
Oct 14 05:30:47 localhost python3[155603]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 14 05:30:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27783 DF PROTO=TCP SPT=60500 DPT=9101 SEQ=1925329315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760FCAA90000000001030307)
Oct 14 05:30:47 localhost python3[155603]:
ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "8808c3fcdd35e5a4eacb6d3f5ed89688361f4338056395008c191e57b6afaf7d",#012 "Digest": "sha256:31464fe4defe28fe4896a946cfe50ee0b001d1a03081174d9f69e4a313b0f21e",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:31464fe4defe28fe4896a946cfe50ee0b001d1a03081174d9f69e4a313b0f21e"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-13T13:00:39.999290816Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 345598922,#012 "VirtualSize": 345598922,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/9353b4c9b77a60c02d5cd3c8f9b94918c7a607156d2f7e1365b30ffe1fa49c89/diff:/var/lib/containers/storage/overlay/ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",#012 "UpperDir": 
"/var/lib/containers/storage/overlay/41d6d78d48a59c2a92b7ebbd672b507950bf0a9c199b961ef8dec56e0bf4d10d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/41d6d78d48a59c2a92b7ebbd672b507950bf0a9c199b961ef8dec56e0bf4d10d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",#012 "sha256:2c35d1af0a6e73cbcf6c04a576d2e6a150aeaa6ae9408c81b2003edd71d6ae59",#012 "sha256:941d6c62fda0ad5502f66ca2e71ffe6e3f64b2a5a0db75dac0075fa750a883f2",#012 "sha256:a82e45bff332403f46d24749948c917d1a37ea0b8ab922688da4f6038dc99c66"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-09T00:18:03.867908726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:03.868015697Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251009\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:07.890794359Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-13T12:28:42.843286399Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator 
team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843354051Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843394192Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843417133Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843442193Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843461914Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:43.236856724Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:29:17.539596691Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:29:21.007092512Z",#012 
"created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-
Oct 14 05:30:47 localhost podman[155650]: 2025-10-14 09:30:47.674625553 +0000 UTC m=+0.088802830 container remove 403ebe54dd79e16cd09867ca1c4dd8675b1262103186305639224ee4ee87cd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 14 05:30:47 localhost python3[155603]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller
Oct 14 05:30:47 localhost podman[155664]:
Oct 14 05:30:47 localhost podman[155664]: 2025-10-14 09:30:47.777479097 +0000 UTC m=+0.084224949 container create 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:30:47 localhost podman[155664]: 2025-10-14 09:30:47.736751419 +0000 UTC m=+0.043497291 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 14 05:30:47 localhost python3[155603]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 14 05:30:48 localhost python3.9[155794]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:30:49 localhost python3.9[155888]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:49 localhost python3.9[155934]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:30:50 localhost python3.9[156025]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760434249.8937016-1682-113359911259026/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:30:51 localhost python3.9[156071]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 05:30:51 localhost systemd[1]: Reloading.
Oct 14 05:30:51 localhost systemd-rc-local-generator[156112]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:30:51 localhost systemd-sysv-generator[156115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:30:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:30:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27784 DF PROTO=TCP SPT=60500 DPT=9101 SEQ=1925329315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760FDA690000000001030307)
Oct 14 05:30:52 localhost python3.9[156194]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:30:52 localhost systemd[1]: Reloading.
Oct 14 05:30:52 localhost systemd-rc-local-generator[156241]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:30:52 localhost systemd-sysv-generator[156247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:30:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:30:52 localhost systemd[1]: Starting ovn_controller container...
Oct 14 05:30:52 localhost systemd[1]: tmp-crun.ntXGfs.mount: Deactivated successfully.
Oct 14 05:30:52 localhost systemd[1]: Started libcrun container.
Oct 14 05:30:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab85c6689ef102f5edd1eabaf998704c51d245cef93fc48d097754048bc1ac8/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 14 05:30:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:30:52 localhost podman[156272]: 2025-10-14 09:30:52.560368719 +0000 UTC m=+0.147365207 container init 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:30:52 localhost ovn_controller[156286]: + sudo -E kolla_set_configs
Oct 14 05:30:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:30:52 localhost podman[156272]: 2025-10-14 09:30:52.592416861 +0000 UTC m=+0.179413359 container start 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:30:52 localhost edpm-start-podman-container[156272]: ovn_controller
Oct 14 05:30:52 localhost systemd[1]: Created slice User Slice of UID 0.
Oct 14 05:30:52 localhost systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 14 05:30:52 localhost systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 14 05:30:52 localhost systemd[1]: Starting User Manager for UID 0...
Oct 14 05:30:52 localhost podman[156293]: 2025-10-14 09:30:52.68394423 +0000 UTC m=+0.086645388 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:30:52 localhost podman[156293]: 2025-10-14 09:30:52.69907112 +0000 UTC m=+0.101772268 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true)
Oct 14 05:30:52 localhost podman[156293]: unhealthy
Oct 14 05:30:52 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:30:52 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Failed with result 'exit-code'.
Oct 14 05:30:52 localhost edpm-start-podman-container[156271]: Creating additional drop-in dependency for "ovn_controller" (328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec)
Oct 14 05:30:52 localhost systemd[1]: Reloading.
Oct 14 05:30:52 localhost systemd[156319]: Queued start job for default target Main User Target.
Oct 14 05:30:52 localhost systemd[156319]: Created slice User Application Slice.
Oct 14 05:30:52 localhost systemd[156319]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 14 05:30:52 localhost systemd[156319]: Started Daily Cleanup of User's Temporary Directories. Oct 14 05:30:52 localhost systemd[156319]: Reached target Paths. Oct 14 05:30:52 localhost systemd[156319]: Reached target Timers. Oct 14 05:30:52 localhost systemd[156319]: Starting D-Bus User Message Bus Socket... Oct 14 05:30:52 localhost systemd[156319]: Starting Create User's Volatile Files and Directories... Oct 14 05:30:52 localhost systemd[156319]: Listening on D-Bus User Message Bus Socket. Oct 14 05:30:52 localhost systemd[156319]: Reached target Sockets. Oct 14 05:30:52 localhost systemd[156319]: Finished Create User's Volatile Files and Directories. Oct 14 05:30:52 localhost systemd[156319]: Reached target Basic System. Oct 14 05:30:52 localhost systemd[156319]: Reached target Main User Target. Oct 14 05:30:52 localhost systemd[156319]: Startup finished in 126ms. Oct 14 05:30:52 localhost systemd-sysv-generator[156378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:30:52 localhost systemd-rc-local-generator[156374]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:30:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:30:53 localhost systemd[1]: Started User Manager for UID 0. Oct 14 05:30:53 localhost systemd[1]: Started ovn_controller container. Oct 14 05:30:53 localhost systemd[1]: Started Session c11 of User root. 
Oct 14 05:30:53 localhost ovn_controller[156286]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:30:53 localhost ovn_controller[156286]: INFO:__main__:Validating config file Oct 14 05:30:53 localhost ovn_controller[156286]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:30:53 localhost ovn_controller[156286]: INFO:__main__:Writing out command to execute Oct 14 05:30:53 localhost systemd[1]: session-c11.scope: Deactivated successfully. Oct 14 05:30:53 localhost ovn_controller[156286]: ++ cat /run_command Oct 14 05:30:53 localhost ovn_controller[156286]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Oct 14 05:30:53 localhost ovn_controller[156286]: + ARGS= Oct 14 05:30:53 localhost ovn_controller[156286]: + sudo kolla_copy_cacerts Oct 14 05:30:53 localhost systemd[1]: Started Session c12 of User root. Oct 14 05:30:53 localhost ovn_controller[156286]: + [[ ! -n '' ]] Oct 14 05:30:53 localhost ovn_controller[156286]: + . kolla_extend_start Oct 14 05:30:53 localhost ovn_controller[156286]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Oct 14 05:30:53 localhost ovn_controller[156286]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Oct 14 05:30:53 localhost ovn_controller[156286]: + umask 0022 Oct 14 05:30:53 localhost ovn_controller[156286]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Oct 14 05:30:53 localhost systemd[1]: session-c12.scope: Deactivated successfully. Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... 
Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8] Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00004|main|INFO|OVS IDL reconnected, force recompute. Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00013|main|INFO|OVS feature set changed, force recompute. Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms) Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute. Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00021|main|INFO|OVS feature set changed, force recompute. Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Oct 14 05:30:53 localhost ovn_controller[156286]: 2025-10-14T09:30:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Oct 14 05:30:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37717 DF PROTO=TCP SPT=36200 DPT=9102 SEQ=4235900056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760FE3FF0000000001030307) Oct 14 05:30:53 localhost python3.9[156485]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:30:53 localhost ovs-vsctl[156486]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload Oct 14 05:30:54 localhost python3.9[156578]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:30:54 localhost ovs-vsctl[156580]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." 
column external_ids Oct 14 05:30:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37718 DF PROTO=TCP SPT=36200 DPT=9102 SEQ=4235900056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760FE7E90000000001030307) Oct 14 05:30:55 localhost python3.9[156673]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:30:55 localhost ovs-vsctl[156674]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Oct 14 05:30:56 localhost systemd[1]: session-50.scope: Deactivated successfully. Oct 14 05:30:56 localhost systemd[1]: session-50.scope: Consumed 41.395s CPU time. Oct 14 05:30:56 localhost systemd-logind[760]: Session 50 logged out. Waiting for processes to exit. Oct 14 05:30:56 localhost systemd-logind[760]: Removed session 50. Oct 14 05:31:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57138 DF PROTO=TCP SPT=34724 DPT=9100 SEQ=4084618898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760FFDE40000000001030307) Oct 14 05:31:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1569 DF PROTO=TCP SPT=34740 DPT=9882 SEQ=3431006105 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A760FFE8A0000000001030307) Oct 14 05:31:02 localhost sshd[156689]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:31:02 localhost systemd-logind[760]: New session 52 of user zuul. 
Oct 14 05:31:02 localhost systemd[1]: Started Session 52 of User zuul. Oct 14 05:31:03 localhost python3.9[156782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:31:03 localhost systemd[1]: Stopping User Manager for UID 0... Oct 14 05:31:03 localhost systemd[156319]: Activating special unit Exit the Session... Oct 14 05:31:03 localhost systemd[156319]: Stopped target Main User Target. Oct 14 05:31:03 localhost systemd[156319]: Stopped target Basic System. Oct 14 05:31:03 localhost systemd[156319]: Stopped target Paths. Oct 14 05:31:03 localhost systemd[156319]: Stopped target Sockets. Oct 14 05:31:03 localhost systemd[156319]: Stopped target Timers. Oct 14 05:31:03 localhost systemd[156319]: Stopped Daily Cleanup of User's Temporary Directories. Oct 14 05:31:03 localhost systemd[156319]: Closed D-Bus User Message Bus Socket. Oct 14 05:31:03 localhost systemd[156319]: Stopped Create User's Volatile Files and Directories. Oct 14 05:31:03 localhost systemd[156319]: Removed slice User Application Slice. Oct 14 05:31:03 localhost systemd[156319]: Reached target Shutdown. Oct 14 05:31:03 localhost systemd[156319]: Finished Exit the Session. Oct 14 05:31:03 localhost systemd[156319]: Reached target Exit the Session. Oct 14 05:31:03 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 14 05:31:03 localhost systemd[1]: Stopped User Manager for UID 0. Oct 14 05:31:03 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 14 05:31:03 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 14 05:31:03 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 14 05:31:03 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 14 05:31:03 localhost systemd[1]: Removed slice User Slice of UID 0. 
Oct 14 05:31:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57140 DF PROTO=TCP SPT=34724 DPT=9100 SEQ=4084618898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761009EA0000000001030307) Oct 14 05:31:04 localhost python3.9[156881]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:05 localhost python3.9[156973]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:05 localhost python3.9[157065]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:06 localhost python3.9[157157]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:07 localhost python3.9[157249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57141 DF PROTO=TCP SPT=34724 DPT=9100 SEQ=4084618898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761019A90000000001030307) Oct 14 05:31:07 localhost python3.9[157339]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:31:08 localhost python3.9[157431]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Oct 14 05:31:09 localhost python3.9[157521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51667 DF PROTO=TCP SPT=56780 DPT=9105 SEQ=4208470177 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761024E90000000001030307) Oct 14 05:31:10 
localhost python3.9[157594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434269.0787983-218-99643614941425/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:11 localhost python3.9[157684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:11 localhost python3.9[157758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434270.6601546-263-188155031787433/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:12 localhost python3.9[157850]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:31:13 localhost python3.9[157904]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False 
update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:31:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55334 DF PROTO=TCP SPT=42222 DPT=9101 SEQ=1412957619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761033C30000000001030307) Oct 14 05:31:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55336 DF PROTO=TCP SPT=42222 DPT=9101 SEQ=1412957619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76103FEA0000000001030307) Oct 14 05:31:17 localhost python3.9[157998]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 14 05:31:18 localhost ovn_controller[156286]: 2025-10-14T09:31:18Z|00023|memory|INFO|15004 kB peak resident set size after 25.0 seconds Oct 14 05:31:18 localhost ovn_controller[156286]: 2025-10-14T09:31:18Z|00024|memory|INFO|idl-cells-OVN_Southbound:3978 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:10 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3 Oct 14 05:31:18 localhost python3.9[158091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:19 localhost python3.9[158162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t 
src=/home/zuul/.ansible/tmp/ansible-tmp-1760434278.2546363-374-1444768239055/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:19 localhost python3.9[158252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:20 localhost python3.9[158323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434279.3921664-374-103164836688277/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55337 DF PROTO=TCP SPT=42222 DPT=9101 SEQ=1412957619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76104FAA0000000001030307) Oct 14 05:31:22 localhost python3.9[158413]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:22 
localhost python3.9[158484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434281.599496-506-199466345700247/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:23 localhost python3.9[158574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:31:23 localhost systemd[1]: tmp-crun.kxSVdh.mount: Deactivated successfully. 
Oct 14 05:31:23 localhost podman[158609]: 2025-10-14 09:31:23.549044401 +0000 UTC m=+0.083425653 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Oct 14 05:31:23 localhost podman[158609]: 2025-10-14 09:31:23.592176019 +0000 UTC m=+0.126557271 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller) Oct 14 05:31:23 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:31:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48745 DF PROTO=TCP SPT=51884 DPT=9102 SEQ=1737238364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7610592E0000000001030307) Oct 14 05:31:23 localhost python3.9[158671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434282.8907745-506-203722174063949/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:24 localhost python3.9[158761]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:31:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48746 DF PROTO=TCP SPT=51884 DPT=9102 SEQ=1737238364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76105D2A0000000001030307) Oct 14 05:31:25 localhost python3.9[158855]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 
14 05:31:25 localhost python3.9[158947]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:26 localhost python3.9[158995]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:26 localhost python3.9[159087]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:27 localhost python3.9[159135]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:28 localhost python3.9[159227]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Oct 14 05:31:28 localhost python3.9[159319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:29 localhost python3.9[159367]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:31:30 localhost python3.9[159459]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51813 DF PROTO=TCP SPT=59668 DPT=9100 SEQ=2353146708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761073140000000001030307) Oct 14 05:31:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59672 DF PROTO=TCP SPT=55138 DPT=9882 SEQ=641756593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761073BA0000000001030307) Oct 14 05:31:30 localhost python3.9[159507]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset 
recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:31:31 localhost python3.9[159599]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:31:31 localhost systemd[1]: Reloading. Oct 14 05:31:31 localhost systemd-rc-local-generator[159620]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:31:31 localhost systemd-sysv-generator[159623]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:31:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:31:33 localhost python3.9[159729]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51815 DF PROTO=TCP SPT=59668 DPT=9100 SEQ=2353146708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76107F290000000001030307) Oct 14 05:31:33 localhost python3.9[159777]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:31:34 localhost python3.9[159869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:35 localhost python3.9[159917]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:31:35 localhost python3.9[160009]: ansible-ansible.builtin.systemd 
Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:31:35 localhost systemd[1]: Reloading. Oct 14 05:31:36 localhost systemd-sysv-generator[160037]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:31:36 localhost systemd-rc-local-generator[160032]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:31:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:31:36 localhost systemd[1]: Starting Create netns directory... Oct 14 05:31:36 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 14 05:31:36 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 14 05:31:36 localhost systemd[1]: Finished Create netns directory. 
Oct 14 05:31:37 localhost python3.9[160143]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51816 DF PROTO=TCP SPT=59668 DPT=9100 SEQ=2353146708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76108EE90000000001030307) Oct 14 05:31:37 localhost python3.9[160235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:38 localhost python3.9[160308]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434297.4299824-959-148359369814389/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:39 localhost python3.9[160400]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:31:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42029 DF PROTO=TCP SPT=54472 DPT=9105 SEQ=3442990233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76109A290000000001030307) Oct 14 05:31:40 localhost python3.9[160492]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:31:41 localhost python3.9[160567]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434299.9882586-1034-118820134648864/.source.json _original_basename=.j0nac3zl follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:31:41 localhost python3.9[160659]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:31:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48980 DF PROTO=TCP SPT=36848 DPT=9101 SEQ=2913290104 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7610A8F30000000001030307) Oct 14 05:31:44 localhost python3.9[160916]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Oct 14 05:31:45 localhost python3.9[161008]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:31:46 localhost python3.9[161100]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 14 05:31:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48982 DF PROTO=TCP SPT=36848 DPT=9101 SEQ=2913290104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7610B4E90000000001030307) Oct 14 05:31:50 localhost python3[161219]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:31:50 localhost python3[161219]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "c6d1b3e4cccd28b7c818995b8e8c01f80bc6d31844f018079ac974a1bc7ff587",#012 "Digest": "sha256:cc78c4a7fbd7c7348d3ee41420dd7c42d83eb1e76a8db6bb94a538a5d2f2c424",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:cc78c4a7fbd7c7348d3ee41420dd7c42d83eb1e76a8db6bb94a538a5d2f2c424"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-13T12:47:50.032440747Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 
"LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 783982852,#012 "VirtualSize": 783982852,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/3d32571c90c517218e75b400153bfe2946f348989aeee2613f1e17f32183ce41/diff:/var/lib/containers/storage/overlay/a10bb81cada1063fdd09337579a73ba5c07dabd1b81c2bfe70924b91722bf534/diff:/var/lib/containers/storage/overlay/0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861/diff:/var/lib/containers/storage/overlay/ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/9dce2160573984ba54f17e563b839daf8c243479b9d2f49c1195fe30690bd2c9/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/9dce2160573984ba54f17e563b839daf8c243479b9d2f49c1195fe30690bd2c9/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",#012 "sha256:2c35d1af0a6e73cbcf6c04a576d2e6a150aeaa6ae9408c81b2003edd71d6ae59",#012 "sha256:3ad61591f8d467f7db4e096e1991f274fe1d4f8ad685b553dacb57c5e894eab0",#012 
"sha256:921303cda5c9d8779e6603d3888ac24385c443b872bec9c3138835df3416e3df",#012 "sha256:c059b89efb40f3097e4f1e24153e4ed15b8a660accccb7f6b341c8900767b90e",#012 "sha256:e4b986e48b4f8d2e3d4ecc6d2e17b8ac252dfafd4e4fec6074bd29e67b374a2f"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2025-10-09T00:18:03.867908726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:03.868015697Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251009\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:07.890794359Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-13T12:28:42.843286399Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843354051Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843394192Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": 
true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843417133Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843442193Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843461914Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:43.236856724Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:29:17.539596691Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.con Oct 14 05:31:50 localhost podman[161268]: 2025-10-14 09:31:50.643802598 +0000 UTC m=+0.097922729 container remove 9bd6bac5661c2b7128509bc213e4046d68365b5d6f5d946582fe10c7b428365c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b594b6ed5677fe328472ea80ffe520cb'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 14 05:31:50 localhost python3[161219]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Oct 14 05:31:50 localhost podman[161282]: Oct 14 05:31:50 localhost podman[161282]: 2025-10-14 09:31:50.766914148 +0000 UTC m=+0.094553296 container create 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:31:50 localhost podman[161282]: 2025-10-14 09:31:50.725482445 +0000 UTC m=+0.053121623 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 14 05:31:50 localhost python3[161219]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 14 05:31:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48983 DF PROTO=TCP SPT=36848 DPT=9101 SEQ=2913290104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7610C4A90000000001030307) Oct 14 05:31:51 localhost python3.9[161409]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:31:52 localhost python3.9[161503]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:31:52 localhost python3.9[161577]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:31:53 localhost python3.9[161702]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760434313.0246472-1298-232046657067274/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:31:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52768 DF PROTO=TCP SPT=60396 DPT=9102 SEQ=1138944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7610CE5E0000000001030307) Oct 14 05:31:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:31:54 localhost systemd[1]: tmp-crun.J2tekX.mount: Deactivated successfully. 
Oct 14 05:31:54 localhost podman[161763]: 2025-10-14 09:31:54.085711894 +0000 UTC m=+0.105857455 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS) Oct 14 05:31:54 localhost podman[161763]: 2025-10-14 09:31:54.122087963 +0000 UTC m=+0.142233484 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:31:54 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:31:54 localhost python3.9[161762]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:31:54 localhost systemd[1]: Reloading. Oct 14 05:31:54 localhost systemd-rc-local-generator[161815]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:31:54 localhost systemd-sysv-generator[161819]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:31:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:31:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52769 DF PROTO=TCP SPT=60396 DPT=9102 SEQ=1138944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7610D2690000000001030307) Oct 14 05:31:55 localhost python3.9[161870]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:31:55 localhost systemd[1]: Reloading. Oct 14 05:31:55 localhost systemd-rc-local-generator[161900]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:31:55 localhost systemd-sysv-generator[161903]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:31:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:31:55 localhost systemd[1]: Starting ovn_metadata_agent container... Oct 14 05:31:55 localhost systemd[1]: Started libcrun container. Oct 14 05:31:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f43c0ac2f76851b0939cd8c2625115986ac5d1758fe742259462d250f16e4fdb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 14 05:31:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f43c0ac2f76851b0939cd8c2625115986ac5d1758fe742259462d250f16e4fdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 05:31:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:31:55 localhost podman[161912]: 2025-10-14 09:31:55.809886032 +0000 UTC m=+0.151401740 container init 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: + sudo -E kolla_set_configs
Oct 14 05:31:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 05:31:55 localhost podman[161912]: 2025-10-14 09:31:55.853856148 +0000 UTC m=+0.195371816 container start 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 05:31:55 localhost edpm-start-podman-container[161912]: ovn_metadata_agent
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Validating config file
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Copying service configuration files
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Writing out command to execute
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: ++ cat /run_command
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: + CMD=neutron-ovn-metadata-agent
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: + ARGS=
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: + sudo kolla_copy_cacerts
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: + [[ ! -n '' ]]
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: + . kolla_extend_start
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: Running command: 'neutron-ovn-metadata-agent'
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: + umask 0022
Oct 14 05:31:55 localhost ovn_metadata_agent[161927]: + exec neutron-ovn-metadata-agent
Oct 14 05:31:55 localhost podman[161935]: 2025-10-14 09:31:55.948746451 +0000 UTC m=+0.090681091 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:31:55 localhost edpm-start-podman-container[161911]: Creating additional drop-in dependency for "ovn_metadata_agent" (6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242)
Oct 14 05:31:55 localhost podman[161935]: 2025-10-14 09:31:55.981950671 +0000 UTC m=+0.123885311 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:31:55 localhost systemd[1]: Reloading.
Oct 14 05:31:56 localhost systemd-sysv-generator[162004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:31:56 localhost systemd-rc-local-generator[161999]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:31:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
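The kolla_set_configs sequence above (load /var/lib/kolla/config_files/config.json, validate, copy config files, set permissions, write out the command that `cat /run_command` later reads) is driven by a kolla config.json. The snippet below is an illustrative sketch of that file's general shape based on the copy and permission messages in this log; the owner/perm values are assumptions, not the actual file from this host:

```python
import json

# Sketch of a kolla config.json for this container. "command" becomes the
# exec'd process; "config_files" entries drive the "Copying ... to ..."
# messages; "permissions" entries drive the "Setting permission for ..."
# messages. Owner/perm values here are assumed for illustration.
config = {
    "command": "neutron-ovn-metadata-agent",
    "config_files": [
        {
            "source": "/etc/neutron.conf.d/01-rootwrap.conf",
            "dest": "/etc/neutron/rootwrap.conf",
            "owner": "neutron",
            "perm": "0600",
        },
    ],
    "permissions": [
        {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
    ],
}
print(json.dumps(config, indent=2))
```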
Oct 14 05:31:56 localhost systemd[1]: tmp-crun.UT9eEg.mount: Deactivated successfully.
Oct 14 05:31:56 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 05:31:56 localhost systemd[1]: Started ovn_metadata_agent container.
Oct 14 05:31:56 localhost systemd-logind[760]: Session 52 logged out. Waiting for processes to exit.
Oct 14 05:31:56 localhost systemd[1]: session-52.scope: Deactivated successfully.
Oct 14 05:31:56 localhost systemd[1]: session-52.scope: Consumed 32.607s CPU time.
Oct 14 05:31:56 localhost systemd-logind[760]: Removed session 52.
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.553 161932 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.554 161932 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.554 161932 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.554 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.554 161932 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.554 161932 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.555 161932 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.555 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.555 161932 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.555 161932 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.555 161932 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.555 161932 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.555 161932 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.555 161932 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.555 161932 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.556 161932 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.557 161932 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.558 161932 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.558 161932 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.558 161932 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.558 161932 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.558 161932 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.558 161932 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.558 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.558 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.558 161932 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.559 161932 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.560 161932 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.561 161932 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.561 161932 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.561 161932 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.561 161932 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.561 161932 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.561 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.561 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.561 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.561 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.562 161932 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.563 161932 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:31:57
localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.564 161932 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.565 161932 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.565 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.565 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.565 161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.565 161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.565 161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.565 161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.565 161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.565 161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.566 161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.566 161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.566 161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.566 
161932 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.566 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.566 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.566 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.566 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.566 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 
09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.567 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.568 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.569 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.570 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.571 161932 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.572 161932 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.573 161932 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.573 161932 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.573 161932 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.573 161932 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.573 161932 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.573 161932 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.573 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.573 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.573 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 
DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.574 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.575 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 
2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.576 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 
localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.577 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.578 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.578 161932 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.578 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.578 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.578 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.578 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.578 161932 DEBUG 
neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.578 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.578 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.579 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.580 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.580 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.580 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.580 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.580 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.580 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.580 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.580 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.580 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 
localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.581 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG 
neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.582 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.583 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.583 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.583 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.583 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.583 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.583 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.583 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.583 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.583 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.584 161932 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.585 161932 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.593 161932 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema 
index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.593 161932 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.593 161932 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.593 161932 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.593 161932 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.607 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 5830d1b9-dd16-4a23-879b-f28430ab4793 (UUID: 5830d1b9-dd16-4a23-879b-f28430ab4793) and ovn bridge br-int. 
_load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.623 161932 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.623 161932 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.623 161932 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.623 161932 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.625 161932 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.628 161932 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.636 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '5830d1b9-dd16-4a23-879b-f28430ab4793'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': '2f887f1f-d4d2-554e-b4e5-b8eeb3607e7a', 'neutron:ovn-metadata-sb-cfg': '1'}, name=5830d1b9-dd16-4a23-879b-f28430ab4793, nb_cfg_timestamp=1760434261252, nb_cfg=3) old= 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.636 161932 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.637 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.637 161932 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.637 161932 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.637 161932 INFO oslo_service.service [-] Starting 1 workers#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.640 161932 DEBUG oslo_service.service [-] Started child 162030 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.642 161932 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpa9z17xyf/privsep.sock']#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.645 162030 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks 
['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-158115'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.668 162030 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.669 162030 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.669 162030 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.672 162030 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.673 162030 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Oct 14 05:31:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:57.688 162030 INFO eventlet.wsgi.server [-] (162030) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.240 161932 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.241 161932 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpa9z17xyf/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.132 162035 INFO 
oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.134 162035 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.136 162035 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.136 162035 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162035#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.244 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[d3995361-1c6f-410a-a8e4-9aede026c7a0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.691 162035 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.691 162035 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:31:58 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:58.691 162035 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.160 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[da57b005-0128-4510-8eb7-90ec2a41cb5f]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 05:31:59 
localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.162 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, column=external_ids, values=({'neutron:ovn-metadata-id': '2f887f1f-d4d2-554e-b4e5-b8eeb3607e7a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.163 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.164 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.181 161932 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.182 161932 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.182 161932 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.182 161932 DEBUG oslo_service.service [-] command line args: [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.182 161932 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.182 161932 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.183 161932 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.183 161932 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.183 161932 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.183 161932 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.184 161932 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.184 161932 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.185 161932 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.185 161932 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.185 161932 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.185 161932 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.185 161932 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.186 161932 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.186 161932 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.186 161932 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.186 161932 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.187 161932 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.187 161932 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.187 161932 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.187 161932 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.188 161932 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.188 161932 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.188 161932 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.188 161932 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.189 161932 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.189 161932 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.189 161932 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.189 161932 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.189 161932 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.190 161932 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.190 161932 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.190 161932 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.190 161932 DEBUG oslo_service.service [-] host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.191 161932 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.191 161932 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.191 161932 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.191 161932 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.192 161932 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.192 161932 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.192 161932 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.192 161932 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.193 161932 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.193 161932 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.193 161932 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.193 161932 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.193 161932 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.194 161932 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.194 161932 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.194 161932 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.194 161932 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.194 161932 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.195 161932 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.195 161932 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.195 161932 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.195 161932 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.195 161932 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.196 161932 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.196 161932 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.196 161932 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.196 161932 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.197 161932 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.197 161932 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.197 161932 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.197 161932 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.197 161932 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.198 161932 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.198 161932 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.198 161932 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.198 161932 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.199 161932 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.199 161932 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.199 161932 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.199 161932 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.199 161932 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.200 161932 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.200 161932 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.200 161932 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.200 161932 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.201 161932 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.201 161932 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.201 161932 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.201 161932 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.201 161932 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.202 161932 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.202 161932 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.202 161932 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.202 161932 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.202 161932 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.203 161932 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.203 161932 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.203 161932 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.203 161932 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.204 161932 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.204 161932 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.204 161932 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.204 161932 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.204 161932 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.205 161932 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.205 161932 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.205 161932 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.205 161932 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.205 161932 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.206 161932 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.206 161932 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.206 161932 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.206 161932 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.207 161932 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.207 161932 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.207 161932 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.207 161932 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.208 161932 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.208 161932 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.208 161932 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.208 161932 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.209 161932 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.209 161932 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.209 161932 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.209 161932 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.210 161932 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.210 161932 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.210 161932 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.210 161932 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.211 161932 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.211 161932 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.211 161932 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.211 161932 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.212 161932 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.212 161932 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.212 161932 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.212 161932 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.212 161932 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.213 161932 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.213 161932 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.213 161932 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.213 161932 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.214 161932 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.214 161932 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.214 161932 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.214 161932 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.214 161932 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.215 161932 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.215 161932 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.215 161932 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.215 161932 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.215 161932 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.216 161932 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.216 161932 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.216 161932 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.216 161932 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.217 161932 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.217 161932 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.217 161932 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.217 161932 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.217 161932 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.217 161932 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.218 161932 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.218 161932 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.218 161932 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.218 161932 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.219 161932 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.219 161932 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.219 161932 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.219 161932 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.219 161932 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.220 161932 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.220 161932 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.220 161932 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.220 161932 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.220 161932 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.221 161932 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.221 161932 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.221 161932 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.221 161932 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.222 161932 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.222 161932 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.222 161932 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:31:59 
localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.222 161932 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.222 161932 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.223 161932 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.223 161932 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.223 161932 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.223 161932 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.224 161932 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.224 161932 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.224 161932 DEBUG oslo_service.service [-] QUOTAS.quota_subnet 
= 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.224 161932 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.224 161932 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.225 161932 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.225 161932 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.225 161932 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.225 161932 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.226 161932 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.226 161932 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.226 161932 DEBUG oslo_service.service [-] nova.keyfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.226 161932 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.226 161932 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.227 161932 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.227 161932 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.227 161932 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.227 161932 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.227 161932 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.228 161932 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.228 161932 DEBUG oslo_service.service [-] placement.endpoint_type 
= public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.228 161932 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.228 161932 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.229 161932 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.229 161932 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.229 161932 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.229 161932 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.229 161932 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.229 161932 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.229 161932 DEBUG oslo_service.service [-] ironic.certfile 
= None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.230 161932 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.230 161932 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.230 161932 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.230 161932 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.230 161932 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.230 161932 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.230 161932 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.230 161932 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.231 161932 DEBUG oslo_service.service 
[-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.231 161932 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.231 161932 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.231 161932 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.231 161932 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.231 161932 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.231 161932 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.231 161932 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.232 161932 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.232 161932 DEBUG 
oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.232 161932 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.232 161932 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.232 161932 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.232 161932 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.232 161932 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.233 161932 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.233 161932 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.233 161932 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost 
ovn_metadata_agent[161927]: 2025-10-14 09:31:59.233 161932 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.233 161932 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.233 161932 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.233 161932 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.233 161932 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.234 161932 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.234 161932 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.234 161932 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.234 161932 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.234 161932 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.234 161932 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.234 161932 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.235 161932 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.235 161932 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.235 161932 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.235 161932 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.235 161932 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.235 161932 DEBUG oslo_service.service 
[-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.235 161932 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.235 161932 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.236 161932 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.236 161932 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.236 161932 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.236 161932 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.236 161932 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.236 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost 
ovn_metadata_agent[161927]: 2025-10-14 09:31:59.236 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.237 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.237 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.237 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.237 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.237 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.237 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.237 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 
09:31:59.238 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.238 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.238 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.238 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.238 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.238 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.238 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.239 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 
09:31:59.239 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.239 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.239 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.239 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.239 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.239 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.239 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.240 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.240 
161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.240 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.240 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.240 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.240 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.240 161932 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.241 161932 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.241 161932 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.241 161932 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.241 161932 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:31:59 localhost ovn_metadata_agent[161927]: 2025-10-14 09:31:59.241 161932 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 14 05:32:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59444 DF PROTO=TCP SPT=42218 DPT=9100 SEQ=1787786825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7610E8440000000001030307) Oct 14 05:32:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14874 DF PROTO=TCP SPT=57334 DPT=9882 SEQ=3829835729 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7610E8EA0000000001030307) Oct 14 05:32:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59446 DF PROTO=TCP SPT=42218 DPT=9100 SEQ=1787786825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7610F4690000000001030307) Oct 14 05:32:07 localhost sshd[162040]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:32:07 localhost systemd-logind[760]: New session 53 of user zuul. Oct 14 05:32:07 localhost systemd[1]: Started Session 53 of User zuul. 
Oct 14 05:32:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59447 DF PROTO=TCP SPT=42218 DPT=9100 SEQ=1787786825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761104290000000001030307) Oct 14 05:32:08 localhost python3.9[162133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:32:09 localhost python3.9[162229]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:10 localhost python3.9[162334]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11626 DF PROTO=TCP SPT=49812 DPT=9105 SEQ=2622925716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76110F690000000001030307) Oct 14 05:32:10 localhost systemd[1]: libpod-7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58.scope: Deactivated successfully. 
Oct 14 05:32:10 localhost podman[162335]: 2025-10-14 09:32:10.458617585 +0000 UTC m=+0.073583388 container died 7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 14 05:32:10 localhost podman[162335]: 2025-10-14 09:32:10.487217712 +0000 UTC m=+0.102183485 container cleanup 7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, version=17.1.9, release=2, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 14 05:32:10 localhost podman[162351]: 2025-10-14 09:32:10.543404909 +0000 UTC m=+0.078421427 container remove 7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, release=2, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 14 05:32:10 localhost systemd[1]: libpod-conmon-7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58.scope: Deactivated successfully. Oct 14 05:32:11 localhost systemd[1]: var-lib-containers-storage-overlay-7ea6eba3b41452cab8e715ebf0cbb227001a53fa044ec7fc4361e175f631660e-merged.mount: Deactivated successfully. 
Oct 14 05:32:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d683555ddc49ea74ad7fbe11504bed90fcd6202e385f3d4df1541c789ffea58-userdata-shm.mount: Deactivated successfully. Oct 14 05:32:11 localhost python3.9[162456]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:32:11 localhost systemd[1]: Reloading. Oct 14 05:32:11 localhost systemd-sysv-generator[162480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:32:11 localhost systemd-rc-local-generator[162476]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:32:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:32:12 localhost python3.9[162581]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:32:13 localhost network[162598]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 05:32:13 localhost network[162599]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:32:13 localhost network[162600]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:32:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:32:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32020 DF PROTO=TCP SPT=33472 DPT=9101 SEQ=3304087627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76111E220000000001030307) Oct 14 05:32:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32022 DF PROTO=TCP SPT=33472 DPT=9101 SEQ=3304087627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76112A2A0000000001030307) Oct 14 05:32:18 localhost python3.9[162802]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:32:18 localhost systemd[1]: Reloading. Oct 14 05:32:18 localhost systemd-sysv-generator[162835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:32:18 localhost systemd-rc-local-generator[162830]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:32:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:32:18 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. 
Oct 14 05:32:19 localhost python3.9[162934]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:32:21 localhost python3.9[163027]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:32:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32023 DF PROTO=TCP SPT=33472 DPT=9101 SEQ=3304087627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761139E90000000001030307) Oct 14 05:32:22 localhost python3.9[163120]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:32:22 localhost python3.9[163213]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:32:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39422 DF PROTO=TCP SPT=60882 DPT=9102 SEQ=2058503577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611438D0000000001030307) Oct 14 05:32:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:32:24 localhost systemd[1]: tmp-crun.CBMp2Q.mount: Deactivated successfully. 
Oct 14 05:32:24 localhost podman[163306]: 2025-10-14 09:32:24.407205678 +0000 UTC m=+0.092170807 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller) Oct 14 05:32:24 localhost podman[163306]: 2025-10-14 09:32:24.444317854 +0000 UTC m=+0.129283003 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009) Oct 14 05:32:24 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:32:24 localhost python3.9[163307]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:32:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39423 DF PROTO=TCP SPT=60882 DPT=9102 SEQ=2058503577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761147A90000000001030307) Oct 14 05:32:26 localhost python3.9[163425]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:32:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:32:26 localhost podman[163427]: 2025-10-14 09:32:26.513972654 +0000 UTC m=+0.080426848 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Oct 14 05:32:26 localhost podman[163427]: 2025-10-14 09:32:26.525043557 +0000 UTC m=+0.091497741 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Oct 14 05:32:26 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:32:27 localhost python3.9[163535]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:28 localhost python3.9[163627]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:28 localhost python3.9[163719]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:29 localhost python3.9[163811]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:29 localhost python3.9[163903]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60302 DF PROTO=TCP SPT=43800 DPT=9100 SEQ=44174060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76115D740000000001030307) Oct 14 05:32:30 localhost python3.9[163995]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6122 DF PROTO=TCP SPT=36190 DPT=9882 SEQ=203461884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76115E190000000001030307) Oct 14 05:32:31 localhost python3.9[164087]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:31 localhost python3.9[164179]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:32 localhost python3.9[164271]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:33 localhost python3.9[164363]: ansible-ansible.builtin.file 
Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60304 DF PROTO=TCP SPT=43800 DPT=9100 SEQ=44174060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761169690000000001030307) Oct 14 05:32:33 localhost python3.9[164455]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:34 localhost python3.9[164547]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:34 localhost python3.9[164639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:35 localhost python3.9[164731]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:32:36 localhost python3.9[164823]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:37 localhost python3.9[164915]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 14 05:32:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60305 DF PROTO=TCP SPT=43800 DPT=9100 SEQ=44174060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761179290000000001030307) Oct 14 05:32:38 localhost python3.9[165007]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False 
name=None state=None enabled=None force=None masked=None Oct 14 05:32:38 localhost systemd[1]: Reloading. Oct 14 05:32:38 localhost systemd-rc-local-generator[165036]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:32:38 localhost systemd-sysv-generator[165039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:32:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:32:39 localhost python3.9[165136]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:40 localhost python3.9[165229]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45119 DF PROTO=TCP SPT=33586 DPT=9105 SEQ=2520492244 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611846A0000000001030307) Oct 14 05:32:40 localhost python3.9[165322]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None 
chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:32:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5658 writes, 25K keys, 5658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5658 writes, 708 syncs, 7.99 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55644d8b0850#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55644d8b0850#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Oct 14 05:32:41 localhost systemd-journald[47332]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 76.0 (253 of 333 items), suggesting rotation. 
Oct 14 05:32:41 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 14 05:32:41 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:32:41 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:32:42 localhost python3.9[165416]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:42 localhost python3.9[165509]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:43 localhost python3.9[165602]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:44 localhost python3.9[165695]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:32:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=2698 DF PROTO=TCP SPT=41818 DPT=9101 SEQ=796582836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761193530000000001030307) Oct 14 05:32:45 localhost python3.9[165788]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None Oct 14 05:32:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:32:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 4839 writes, 21K keys, 4839 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4839 writes, 659 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557c1d2f22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557c1d2f22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) 
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Oct 14 05:32:46 localhost python3.9[165881]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 14 05:32:47 localhost python3.9[165979]: ansible-ansible.builtin.user Invoked with comment=libvirt user 
group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005486731.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Oct 14 05:32:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2700 DF PROTO=TCP SPT=41818 DPT=9101 SEQ=796582836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76119F690000000001030307) Oct 14 05:32:48 localhost python3.9[166079]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:32:49 localhost python3.9[166133]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None 
list=None nobest=None releasever=None Oct 14 05:32:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2701 DF PROTO=TCP SPT=41818 DPT=9101 SEQ=796582836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611AF290000000001030307) Oct 14 05:32:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29223 DF PROTO=TCP SPT=50324 DPT=9102 SEQ=2531205571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611B8BE0000000001030307) Oct 14 05:32:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29224 DF PROTO=TCP SPT=50324 DPT=9102 SEQ=2531205571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611BCA90000000001030307) Oct 14 05:32:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:32:55 localhost systemd[1]: tmp-crun.qIiAU6.mount: Deactivated successfully. 
Oct 14 05:32:55 localhost podman[166266]: 2025-10-14 09:32:55.559173483 +0000 UTC m=+0.096166287 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009) Oct 14 05:32:55 localhost podman[166266]: 2025-10-14 09:32:55.627314561 +0000 UTC m=+0.164307365 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, 
org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 05:32:55 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:32:57 localhost systemd[1]: tmp-crun.qUGOA0.mount: Deactivated successfully. 
Oct 14 05:32:57 localhost podman[166310]: 2025-10-14 09:32:57.553475132 +0000 UTC m=+0.092221950 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 05:32:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:32:57.587 
161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:32:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:32:57.588 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:32:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:32:57.588 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:32:57 localhost podman[166310]: 2025-10-14 09:32:57.588083655 +0000 UTC m=+0.126830523 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:32:57 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:33:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34573 DF PROTO=TCP SPT=59062 DPT=9100 SEQ=3305712819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611D2A40000000001030307) Oct 14 05:33:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25426 DF PROTO=TCP SPT=51868 DPT=9882 SEQ=3470228974 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611D3490000000001030307) Oct 14 05:33:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34575 DF PROTO=TCP SPT=59062 DPT=9100 SEQ=3305712819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611DEA90000000001030307) Oct 14 05:33:07 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34576 DF PROTO=TCP SPT=59062 DPT=9100 SEQ=3305712819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611EE690000000001030307) Oct 14 05:33:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18240 DF PROTO=TCP SPT=43286 DPT=9105 SEQ=3912875290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7611F9A90000000001030307) Oct 14 05:33:14 localhost kernel: SELinux: Converting 2747 SID table entries... Oct 14 05:33:14 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 14 05:33:14 localhost kernel: SELinux: policy capability open_perms=1 Oct 14 05:33:14 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 14 05:33:14 localhost kernel: SELinux: policy capability always_check_network=0 Oct 14 05:33:14 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 14 05:33:14 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 14 05:33:14 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 14 05:33:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43770 DF PROTO=TCP SPT=35116 DPT=9101 SEQ=1664034813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761208830000000001030307) Oct 14 05:33:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43772 DF PROTO=TCP SPT=35116 DPT=9101 SEQ=1664034813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761214A90000000001030307) Oct 14 05:33:21 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43773 DF PROTO=TCP SPT=35116 DPT=9101 SEQ=1664034813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761224690000000001030307) Oct 14 05:33:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47549 DF PROTO=TCP SPT=32926 DPT=9102 SEQ=2124718307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76122DEE0000000001030307) Oct 14 05:33:24 localhost kernel: SELinux: Converting 2750 SID table entries... Oct 14 05:33:24 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 14 05:33:24 localhost kernel: SELinux: policy capability open_perms=1 Oct 14 05:33:24 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 14 05:33:24 localhost kernel: SELinux: policy capability always_check_network=0 Oct 14 05:33:24 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 14 05:33:24 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 14 05:33:24 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 14 05:33:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47550 DF PROTO=TCP SPT=32926 DPT=9102 SEQ=2124718307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761231EA0000000001030307) Oct 14 05:33:26 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=20 res=1 Oct 14 05:33:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:33:26 localhost podman[167306]: 2025-10-14 09:33:26.566051506 +0000 UTC m=+0.094987175 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:33:26 localhost podman[167306]: 2025-10-14 09:33:26.603052724 +0000 UTC m=+0.131988383 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 05:33:26 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:33:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:33:28 localhost podman[167330]: 2025-10-14 09:33:28.536349708 +0000 UTC m=+0.078142290 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:33:28 localhost podman[167330]: 2025-10-14 09:33:28.566943432 +0000 UTC 
m=+0.108736024 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 05:33:28 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:33:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3303 DF PROTO=TCP SPT=53216 DPT=9100 SEQ=606119226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761247D40000000001030307) Oct 14 05:33:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12585 DF PROTO=TCP SPT=36804 DPT=9882 SEQ=2035453534 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761248790000000001030307) Oct 14 05:33:32 localhost kernel: SELinux: Converting 2750 SID table entries... Oct 14 05:33:32 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 14 05:33:32 localhost kernel: SELinux: policy capability open_perms=1 Oct 14 05:33:32 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 14 05:33:32 localhost kernel: SELinux: policy capability always_check_network=0 Oct 14 05:33:32 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 14 05:33:32 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 14 05:33:32 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 14 05:33:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3305 DF PROTO=TCP SPT=53216 DPT=9100 SEQ=606119226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761253E90000000001030307) Oct 14 05:33:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3306 DF PROTO=TCP SPT=53216 DPT=9100 SEQ=606119226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761263A90000000001030307) 
Oct 14 05:33:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33241 DF PROTO=TCP SPT=45482 DPT=9105 SEQ=3860304142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76126EE90000000001030307) Oct 14 05:33:40 localhost kernel: SELinux: Converting 2750 SID table entries... Oct 14 05:33:40 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 14 05:33:40 localhost kernel: SELinux: policy capability open_perms=1 Oct 14 05:33:40 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 14 05:33:40 localhost kernel: SELinux: policy capability always_check_network=0 Oct 14 05:33:40 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 14 05:33:40 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 14 05:33:40 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 14 05:33:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2186 DF PROTO=TCP SPT=46910 DPT=9101 SEQ=1248578006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76127DB20000000001030307) Oct 14 05:33:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2188 DF PROTO=TCP SPT=46910 DPT=9101 SEQ=1248578006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761289A90000000001030307) Oct 14 05:33:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2189 DF PROTO=TCP SPT=46910 DPT=9101 SEQ=1248578006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761299690000000001030307) 
Oct 14 05:33:51 localhost kernel: SELinux: Converting 2750 SID table entries... Oct 14 05:33:51 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 14 05:33:51 localhost kernel: SELinux: policy capability open_perms=1 Oct 14 05:33:51 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 14 05:33:51 localhost kernel: SELinux: policy capability always_check_network=0 Oct 14 05:33:51 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 14 05:33:51 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 14 05:33:51 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 14 05:33:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41076 DF PROTO=TCP SPT=41032 DPT=9102 SEQ=2372150171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7612A31E0000000001030307) Oct 14 05:33:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41077 DF PROTO=TCP SPT=41032 DPT=9102 SEQ=2372150171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7612A72A0000000001030307) Oct 14 05:33:55 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=23 res=1 Oct 14 05:33:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:33:57 localhost systemd[1]: tmp-crun.eRI08p.mount: Deactivated successfully. 
Oct 14 05:33:57 localhost podman[167462]: 2025-10-14 09:33:57.248239815 +0000 UTC m=+0.095058572 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:33:57 localhost podman[167462]: 2025-10-14 09:33:57.285081765 +0000 UTC m=+0.131900502 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 14 05:33:57 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:33:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:33:57.589 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:33:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:33:57.590 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:33:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:33:57.590 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:33:59 localhost podman[167489]: 2025-10-14 09:33:59.529110264 +0000 UTC m=+0.071265409 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Oct 14 05:33:59 localhost podman[167489]: 2025-10-14 09:33:59.584041696 +0000 UTC 
m=+0.126196811 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 05:33:59 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:33:59 localhost kernel: SELinux: Converting 2750 SID table entries... Oct 14 05:33:59 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 14 05:33:59 localhost kernel: SELinux: policy capability open_perms=1 Oct 14 05:33:59 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 14 05:33:59 localhost kernel: SELinux: policy capability always_check_network=0 Oct 14 05:33:59 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 14 05:33:59 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 14 05:33:59 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 14 05:34:00 localhost systemd[1]: Reloading. Oct 14 05:34:00 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=24 res=1 Oct 14 05:34:00 localhost systemd-sysv-generator[167545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:34:00 localhost systemd-rc-local-generator[167539]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:34:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26372 DF PROTO=TCP SPT=51486 DPT=9100 SEQ=3881615710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7612BD030000000001030307) Oct 14 05:34:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:34:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33780 DF PROTO=TCP SPT=45666 DPT=9882 SEQ=435281430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7612BDAA0000000001030307) Oct 14 05:34:00 localhost systemd[1]: Reloading. Oct 14 05:34:00 localhost systemd-sysv-generator[167583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:34:00 localhost systemd-rc-local-generator[167578]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:34:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:34:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26374 DF PROTO=TCP SPT=51486 DPT=9100 SEQ=3881615710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7612C9290000000001030307) Oct 14 05:34:06 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c9:f0:cc MACPROTO=0800 SRC=148.113.210.254 DST=38.102.83.104 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=52412 DF PROTO=TCP SPT=53302 DPT=9090 SEQ=3496254022 ACK=0 WINDOW=64240 RES=0x00 SYN URGP=0 OPT (020405B40402080A2CD130F20000000001030307) Oct 14 05:34:09 localhost kernel: SELinux: Converting 2751 SID table entries... 
Oct 14 05:34:09 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 14 05:34:09 localhost kernel: SELinux: policy capability open_perms=1 Oct 14 05:34:09 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 14 05:34:09 localhost kernel: SELinux: policy capability always_check_network=0 Oct 14 05:34:09 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 14 05:34:09 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 14 05:34:09 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 14 05:34:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8741 DF PROTO=TCP SPT=44206 DPT=9105 SEQ=1579133854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7612E4290000000001030307) Oct 14 05:34:13 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. 
Oct 14 05:34:13 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=25 res=1 Oct 14 05:34:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19547 DF PROTO=TCP SPT=41080 DPT=9101 SEQ=1666180040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7612F2E30000000001030307) Oct 14 05:34:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19549 DF PROTO=TCP SPT=41080 DPT=9101 SEQ=1666180040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7612FEE90000000001030307) Oct 14 05:34:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19550 DF PROTO=TCP SPT=41080 DPT=9101 SEQ=1666180040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76130EA90000000001030307) Oct 14 05:34:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=354 DF PROTO=TCP SPT=36782 DPT=9102 SEQ=2154774760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7613184E0000000001030307) Oct 14 05:34:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=355 DF PROTO=TCP SPT=36782 DPT=9102 SEQ=2154774760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76131C690000000001030307) Oct 14 05:34:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:34:27 localhost podman[168602]: 2025-10-14 09:34:27.567438565 +0000 UTC m=+0.093273975 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 05:34:27 localhost systemd[1]: tmp-crun.5l6a0h.mount: Deactivated successfully. 
Oct 14 05:34:27 localhost podman[168602]: 2025-10-14 09:34:27.638165238 +0000 UTC m=+0.164000648 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 05:34:27 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:34:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47815 DF PROTO=TCP SPT=38924 DPT=9100 SEQ=579900803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761332330000000001030307) Oct 14 05:34:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:34:30 localhost podman[170681]: 2025-10-14 09:34:30.531417963 +0000 UTC m=+0.074840104 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 05:34:30 localhost podman[170681]: 2025-10-14 09:34:30.56510091 +0000 UTC m=+0.108523071 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Oct 14 05:34:30 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:34:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60621 DF PROTO=TCP SPT=58802 DPT=9882 SEQ=135661347 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761332DA0000000001030307) Oct 14 05:34:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47817 DF PROTO=TCP SPT=38924 DPT=9100 SEQ=579900803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76133E290000000001030307) Oct 14 05:34:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47818 DF PROTO=TCP SPT=38924 DPT=9100 SEQ=579900803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76134DE90000000001030307) Oct 14 05:34:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4390 DF PROTO=TCP SPT=52654 DPT=9105 SEQ=1327675426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761359290000000001030307) Oct 14 05:34:44 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1188 DF PROTO=TCP SPT=53382 DPT=9101 SEQ=700056476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761368140000000001030307) Oct 14 05:34:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1190 DF PROTO=TCP SPT=53382 DPT=9101 SEQ=700056476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761374290000000001030307) Oct 14 05:34:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1191 DF PROTO=TCP SPT=53382 DPT=9101 SEQ=700056476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761383E90000000001030307) Oct 14 05:34:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53087 DF PROTO=TCP SPT=39724 DPT=9102 SEQ=3700090677 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76138D7E0000000001030307) Oct 14 05:34:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53088 DF PROTO=TCP SPT=39724 DPT=9102 SEQ=3700090677 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761391690000000001030307) Oct 14 05:34:56 localhost systemd[1]: Stopping OpenSSH server daemon... Oct 14 05:34:56 localhost systemd[1]: sshd.service: Deactivated successfully. Oct 14 05:34:56 localhost systemd[1]: Stopped OpenSSH server daemon. Oct 14 05:34:56 localhost systemd[1]: Stopped target sshd-keygen.target. Oct 14 05:34:56 localhost systemd[1]: Stopping sshd-keygen.target... 
Oct 14 05:34:56 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 14 05:34:56 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 14 05:34:56 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 14 05:34:56 localhost systemd[1]: Reached target sshd-keygen.target. Oct 14 05:34:56 localhost systemd[1]: Starting OpenSSH server daemon... Oct 14 05:34:56 localhost sshd[185265]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:34:56 localhost systemd[1]: Started OpenSSH server daemon. Oct 14 05:34:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:34:57.589 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:34:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:34:57.590 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:34:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:34:57.590 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:34:57 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:34:57 localhost podman[185556]: 2025-10-14 09:34:57.805372376 +0000 UTC m=+0.116534076 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 05:34:57 localhost podman[185556]: 2025-10-14 09:34:57.912999084 +0000 UTC m=+0.224160764 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller) Oct 14 05:34:57 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:34:58 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 14 05:34:58 localhost systemd[1]: Starting man-db-cache-update.service... Oct 14 05:34:58 localhost systemd[1]: Reloading. Oct 14 05:34:58 localhost systemd-sysv-generator[185657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:34:58 localhost systemd-rc-local-generator[185652]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:34:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:34:58 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 14 05:34:58 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 14 05:35:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15746 DF PROTO=TCP SPT=44734 DPT=9100 SEQ=1435660077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7613A7640000000001030307) Oct 14 05:35:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22767 DF PROTO=TCP SPT=33120 DPT=9882 SEQ=2437358754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7613A80A0000000001030307) Oct 14 05:35:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:35:00 localhost podman[189397]: 2025-10-14 09:35:00.783003479 +0000 UTC m=+0.070792847 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 05:35:00 localhost podman[189397]: 2025-10-14 09:35:00.812353581 +0000 UTC m=+0.100142979 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Oct 14 05:35:00 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:35:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15748 DF PROTO=TCP SPT=44734 DPT=9100 SEQ=1435660077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7613B3690000000001030307) Oct 14 05:35:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15749 DF PROTO=TCP SPT=44734 DPT=9100 SEQ=1435660077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7613C3290000000001030307) Oct 14 05:35:08 localhost python3.9[193717]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 14 05:35:09 localhost systemd[1]: Reloading. Oct 14 05:35:09 localhost systemd-rc-local-generator[193943]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 05:35:09 localhost systemd-sysv-generator[193952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:35:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:35:10 localhost python3.9[194318]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 14 05:35:10 localhost systemd[1]: Reloading. Oct 14 05:35:10 localhost systemd-sysv-generator[194463]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:35:10 localhost systemd-rc-local-generator[194460]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:35:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:35:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2635 DF PROTO=TCP SPT=58760 DPT=9105 SEQ=1842033862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7613CE690000000001030307) Oct 14 05:35:10 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 14 05:35:10 localhost systemd[1]: Finished man-db-cache-update.service. Oct 14 05:35:10 localhost systemd[1]: man-db-cache-update.service: Consumed 15.036s CPU time. 
Oct 14 05:35:10 localhost systemd[1]: run-r8193f36f73cb4bf4b3fa055571a4497f.service: Deactivated successfully. Oct 14 05:35:10 localhost systemd[1]: run-r94eee69d0f704c9bad66ca8b603500a0.service: Deactivated successfully. Oct 14 05:35:12 localhost python3.9[194587]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 14 05:35:13 localhost systemd[1]: Reloading. Oct 14 05:35:13 localhost systemd-sysv-generator[194620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:35:13 localhost systemd-rc-local-generator[194617]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:35:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:35:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29659 DF PROTO=TCP SPT=36294 DPT=9101 SEQ=4206752036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7613DD430000000001030307) Oct 14 05:35:14 localhost python3.9[194736]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 14 05:35:14 localhost systemd[1]: Reloading. Oct 14 05:35:15 localhost systemd-rc-local-generator[194761]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:35:15 localhost systemd-sysv-generator[194765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:35:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:35:16 localhost python3.9[194885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:16 localhost systemd[1]: Reloading. Oct 14 05:35:16 localhost systemd-rc-local-generator[194911]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:35:16 localhost systemd-sysv-generator[194915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:35:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:35:17 localhost python3.9[195034]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:17 localhost systemd[1]: Reloading. Oct 14 05:35:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29661 DF PROTO=TCP SPT=36294 DPT=9101 SEQ=4206752036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7613E9690000000001030307) Oct 14 05:35:17 localhost systemd-rc-local-generator[195062]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 05:35:17 localhost systemd-sysv-generator[195068]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:35:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:35:18 localhost python3.9[195183]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:18 localhost systemd[1]: Reloading. Oct 14 05:35:18 localhost systemd-sysv-generator[195211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:35:18 localhost systemd-rc-local-generator[195208]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:35:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:35:19 localhost python3.9[195332]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:20 localhost python3.9[195445]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:20 localhost systemd[1]: Reloading. 
Oct 14 05:35:20 localhost systemd-sysv-generator[195477]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:35:20 localhost systemd-rc-local-generator[195474]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:35:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:35:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29662 DF PROTO=TCP SPT=36294 DPT=9101 SEQ=4206752036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7613F9290000000001030307) Oct 14 05:35:21 localhost python3.9[195595]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 14 05:35:21 localhost systemd[1]: Reloading. Oct 14 05:35:21 localhost systemd-rc-local-generator[195622]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:35:21 localhost systemd-sysv-generator[195625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:35:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:35:22 localhost python3.9[195744]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:23 localhost python3.9[195857]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38785 DF PROTO=TCP SPT=33760 DPT=9102 SEQ=4199029219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761402AE0000000001030307) Oct 14 05:35:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38786 DF PROTO=TCP SPT=33760 DPT=9102 SEQ=4199029219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761406A90000000001030307) Oct 14 05:35:26 localhost python3.9[195970]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:26 localhost python3.9[196083]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:28 localhost python3.9[196196]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:35:28 localhost podman[196198]: 2025-10-14 09:35:28.142187748 +0000 UTC m=+0.083346421 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:35:28 localhost podman[196198]: 2025-10-14 09:35:28.21612976 +0000 UTC m=+0.157288443 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 05:35:28 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:35:29 localhost python3.9[196334]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58927 DF PROTO=TCP SPT=37910 DPT=9100 SEQ=3083652655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76141C940000000001030307) Oct 14 05:35:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2184 DF PROTO=TCP SPT=55822 DPT=9882 SEQ=2054488445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76141D390000000001030307) Oct 14 05:35:30 localhost python3.9[196447]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:35:31 localhost podman[196560]: 2025-10-14 09:35:31.336880213 +0000 UTC m=+0.078635867 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:35:31 localhost podman[196560]: 2025-10-14 09:35:31.348117655 +0000 UTC 
m=+0.089873279 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 05:35:31 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:35:31 localhost python3.9[196561]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:32 localhost python3.9[196691]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58929 DF PROTO=TCP SPT=37910 DPT=9100 SEQ=3083652655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761428A90000000001030307) Oct 14 05:35:34 localhost python3.9[196804]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:36 localhost python3.9[196918]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:36 localhost python3.9[197031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58930 DF PROTO=TCP SPT=37910 DPT=9100 SEQ=3083652655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761438690000000001030307) Oct 14 05:35:37 localhost python3.9[197144]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False 
daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:39 localhost python3.9[197257]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 14 05:35:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47029 DF PROTO=TCP SPT=55950 DPT=9105 SEQ=4105342462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761443AA0000000001030307) Oct 14 05:35:41 localhost python3.9[197370]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:35:42 localhost python3.9[197480]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:35:43 localhost python3.9[197590]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
attributes=None Oct 14 05:35:43 localhost python3.9[197700]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:35:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31152 DF PROTO=TCP SPT=49016 DPT=9101 SEQ=2021395104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761452730000000001030307) Oct 14 05:35:44 localhost python3.9[197810]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:35:45 localhost python3.9[197920]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:35:46 localhost python3.9[198030]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:35:47 localhost python3.9[198120]: 
ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760434545.8099942-1643-226273031960306/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31154 DF PROTO=TCP SPT=49016 DPT=9101 SEQ=2021395104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76145E690000000001030307) Oct 14 05:35:47 localhost python3.9[198230]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:35:48 localhost python3.9[198320]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760434547.3424122-1643-44294423625725/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:49 localhost python3.9[198430]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:35:49 localhost python3.9[198520]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 
owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760434548.5819395-1643-173657005474547/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:50 localhost python3.9[198630]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:35:50 localhost python3.9[198720]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760434549.7529337-1643-242276863500170/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31155 DF PROTO=TCP SPT=49016 DPT=9101 SEQ=2021395104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76146E290000000001030307) Oct 14 05:35:51 localhost python3.9[198830]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:35:52 localhost python3.9[198920]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760434550.930906-1643-158188845921053/.source.conf 
follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:53 localhost python3.9[199030]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:35:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29609 DF PROTO=TCP SPT=41890 DPT=9102 SEQ=3261218134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761477DE0000000001030307) Oct 14 05:35:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29610 DF PROTO=TCP SPT=41890 DPT=9102 SEQ=3261218134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76147BE90000000001030307) Oct 14 05:35:54 localhost python3.9[199120]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760434552.9006512-1643-80161680386771/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:55 localhost python3.9[199230]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:35:56 
localhost python3.9[199318]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760434555.1995575-1643-120186488626378/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:56 localhost python3.9[199428]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:35:57 localhost python3.9[199518]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760434556.3915005-1643-149662262964165/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:35:57.590 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:35:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:35:57.590 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:35:57 localhost 
ovn_metadata_agent[161927]: 2025-10-14 09:35:57.590 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:35:58 localhost python3.9[199628]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:35:58 localhost podman[199662]: 2025-10-14 09:35:58.543885833 +0000 UTC m=+0.082818478 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 14 05:35:58 localhost podman[199662]: 2025-10-14 09:35:58.621046844 +0000 UTC m=+0.159979479 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:35:58 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:35:58 localhost python3.9[199763]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:35:59 localhost python3.9[199873]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:00 localhost python3.9[199983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17024 DF PROTO=TCP SPT=58002 DPT=9100 SEQ=689113787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761491C40000000001030307) Oct 14 05:36:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62978 DF PROTO=TCP SPT=46910 DPT=9882 SEQ=3258719042 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A761492690000000001030307) Oct 14 05:36:00 localhost python3.9[200093]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:36:01 localhost podman[200247]: 2025-10-14 09:36:01.523788272 +0000 UTC m=+0.085139644 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:36:01 localhost podman[200247]: 2025-10-14 09:36:01.52865277 +0000 UTC m=+0.090004142 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 05:36:01 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:36:01 localhost python3.9[200246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:02 localhost python3.9[200437]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:03 localhost python3.9[200564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17026 DF PROTO=TCP SPT=58002 DPT=9100 SEQ=689113787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76149DE90000000001030307) Oct 14 05:36:03 localhost python3.9[200692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:04 localhost python3.9[200802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:05 localhost python3.9[200912]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:06 localhost python3.9[201022]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d 
state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:07 localhost python3.9[201132]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17027 DF PROTO=TCP SPT=58002 DPT=9100 SEQ=689113787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7614ADA90000000001030307) Oct 14 05:36:07 localhost python3.9[201242]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:09 localhost python3.9[201352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None 
selevel=None setype=None attributes=None Oct 14 05:36:09 localhost python3.9[201462]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40378 DF PROTO=TCP SPT=50846 DPT=9105 SEQ=1621939837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7614B8E90000000001030307) Oct 14 05:36:10 localhost python3.9[201550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434569.4519696-2306-208209021093847/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:11 localhost python3.9[201660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:11 localhost python3.9[201748]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434570.7042198-2306-236816501872140/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None 
serole=None selevel=None setype=None attributes=None Oct 14 05:36:12 localhost python3.9[201858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:13 localhost python3.9[201946]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434571.9290278-2306-274242130571679/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:13 localhost python3.9[202056]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23204 DF PROTO=TCP SPT=44432 DPT=9101 SEQ=3787483623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7614C7A40000000001030307) Oct 14 05:36:14 localhost python3.9[202144]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434573.1708806-2306-210247172536063/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:15 localhost python3.9[202254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:15 localhost python3.9[202342]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434574.58943-2306-280224910295189/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:16 localhost python3.9[202452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:16 localhost python3.9[202540]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434575.7571955-2306-104700083375228/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23206 DF PROTO=TCP SPT=44432 DPT=9101 
SEQ=3787483623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7614D3AA0000000001030307) Oct 14 05:36:17 localhost python3.9[202650]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:18 localhost python3.9[202738]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434577.0339231-2306-225649247727444/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:18 localhost python3.9[202848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:19 localhost python3.9[202936]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434578.2268775-2306-89391655129063/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:19 localhost python3.9[203046]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 
get_mime=True get_attributes=True Oct 14 05:36:20 localhost python3.9[203134]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434579.4624238-2306-172532127083256/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23207 DF PROTO=TCP SPT=44432 DPT=9101 SEQ=3787483623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7614E3690000000001030307) Oct 14 05:36:21 localhost python3.9[203244]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:22 localhost python3.9[203332]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434581.1165724-2306-216348327522159/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:23 localhost python3.9[203442]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19516 DF PROTO=TCP SPT=53264 DPT=9102 SEQ=2096160851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7614ED0E0000000001030307) Oct 14 05:36:23 localhost python3.9[203530]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434582.9635801-2306-51039052322094/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:24 localhost python3.9[203640]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19517 DF PROTO=TCP SPT=53264 DPT=9102 SEQ=2096160851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7614F12A0000000001030307) Oct 14 05:36:25 localhost python3.9[203728]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434584.1766918-2306-186948630807209/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:25 localhost python3.9[203838]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:26 localhost python3.9[203926]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434585.4050617-2306-59558197044914/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:27 localhost python3.9[204036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:27 localhost python3.9[204124]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434586.5707161-2306-171910619401705/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:28 localhost python3.9[204232]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep 
-E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:36:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:36:29 localhost podman[204331]: 2025-10-14 09:36:29.08879232 +0000 UTC m=+0.097177593 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:36:29 localhost podman[204331]: 2025-10-14 09:36:29.133105821 +0000 UTC m=+0.141491124 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 14 05:36:29 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:36:29 localhost python3.9[204357]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Oct 14 05:36:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4532 DF PROTO=TCP SPT=50984 DPT=9100 SEQ=3906822087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761506F40000000001030307) Oct 14 05:36:30 localhost python3.9[204481]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:36:30 localhost systemd[1]: Reloading. Oct 14 05:36:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51678 DF PROTO=TCP SPT=45568 DPT=9882 SEQ=3366089648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615079A0000000001030307) Oct 14 05:36:30 localhost systemd-rc-local-generator[204505]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:36:30 localhost systemd-sysv-generator[204512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:36:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:36:30 localhost systemd[1]: Starting libvirt logging daemon socket... Oct 14 05:36:30 localhost systemd[1]: Listening on libvirt logging daemon socket. Oct 14 05:36:30 localhost systemd[1]: Starting libvirt logging daemon admin socket... 
Oct 14 05:36:30 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Oct 14 05:36:30 localhost systemd[1]: Starting libvirt logging daemon... Oct 14 05:36:30 localhost systemd[1]: Started libvirt logging daemon. Oct 14 05:36:31 localhost python3.9[204633]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:36:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:36:31 localhost systemd[1]: Reloading. Oct 14 05:36:31 localhost systemd-rc-local-generator[204669]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:36:31 localhost systemd-sysv-generator[204675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 05:36:31 localhost podman[204635]: 2025-10-14 09:36:31.893969434 +0000 UTC m=+0.109943320 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3) Oct 14 05:36:31 localhost podman[204635]: 2025-10-14 09:36:31.923993813 +0000 UTC 
m=+0.139967699 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 05:36:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 14 05:36:32 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:36:32 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Oct 14 05:36:32 localhost systemd[1]: Starting libvirt nodedev daemon socket... Oct 14 05:36:32 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Oct 14 05:36:32 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Oct 14 05:36:32 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Oct 14 05:36:32 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Oct 14 05:36:32 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Oct 14 05:36:32 localhost systemd[1]: Starting libvirt nodedev daemon... Oct 14 05:36:32 localhost systemd[1]: Started libvirt nodedev daemon. Oct 14 05:36:32 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Oct 14 05:36:32 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Oct 14 05:36:32 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Oct 14 05:36:32 localhost python3.9[204831]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:36:32 localhost systemd[1]: Reloading. Oct 14 05:36:33 localhost systemd-sysv-generator[204862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:36:33 localhost systemd-rc-local-generator[204857]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 05:36:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:36:33 localhost systemd[1]: Starting libvirt proxy daemon socket... Oct 14 05:36:33 localhost systemd[1]: Listening on libvirt proxy daemon socket. Oct 14 05:36:33 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Oct 14 05:36:33 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Oct 14 05:36:33 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Oct 14 05:36:33 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Oct 14 05:36:33 localhost systemd[1]: Starting libvirt proxy daemon... Oct 14 05:36:33 localhost systemd[1]: Started libvirt proxy daemon. Oct 14 05:36:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4534 DF PROTO=TCP SPT=50984 DPT=9100 SEQ=3906822087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761512E90000000001030307) Oct 14 05:36:33 localhost setroubleshoot[204687]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a2b72235-e965-4511-a7cc-89b604b9221a Oct 14 05:36:33 localhost setroubleshoot[204687]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Oct 14 05:36:33 localhost setroubleshoot[204687]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a2b72235-e965-4511-a7cc-89b604b9221a Oct 14 05:36:33 localhost setroubleshoot[204687]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Oct 14 05:36:34 localhost python3.9[205002]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:36:34 localhost systemd[1]: Reloading. Oct 14 05:36:34 localhost systemd-rc-local-generator[205031]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:36:34 localhost systemd-sysv-generator[205035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:36:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:36:35 localhost systemd[1]: Listening on libvirt locking daemon socket. Oct 14 05:36:35 localhost systemd[1]: Starting libvirt QEMU daemon socket... Oct 14 05:36:35 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 14 05:36:35 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Oct 14 05:36:35 localhost systemd[1]: Listening on libvirt QEMU daemon socket. 
Oct 14 05:36:35 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... Oct 14 05:36:35 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Oct 14 05:36:35 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Oct 14 05:36:35 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Oct 14 05:36:35 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Oct 14 05:36:35 localhost systemd[1]: Starting libvirt QEMU daemon... Oct 14 05:36:35 localhost systemd[1]: Started libvirt QEMU daemon. Oct 14 05:36:35 localhost python3.9[205175]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:36:35 localhost systemd[1]: Reloading. Oct 14 05:36:36 localhost systemd-sysv-generator[205204]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:36:36 localhost systemd-rc-local-generator[205199]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:36:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:36:36 localhost systemd[1]: Starting libvirt secret daemon socket... Oct 14 05:36:36 localhost systemd[1]: Listening on libvirt secret daemon socket. Oct 14 05:36:36 localhost systemd[1]: Starting libvirt secret daemon admin socket... Oct 14 05:36:36 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Oct 14 05:36:36 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Oct 14 05:36:36 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. 
Oct 14 05:36:36 localhost systemd[1]: Starting libvirt secret daemon... Oct 14 05:36:36 localhost systemd[1]: Started libvirt secret daemon. Oct 14 05:36:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4535 DF PROTO=TCP SPT=50984 DPT=9100 SEQ=3906822087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761522A90000000001030307) Oct 14 05:36:38 localhost python3.9[205344]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:39 localhost python3.9[205454]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 14 05:36:39 localhost python3.9[205564]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:36:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9137 DF PROTO=TCP SPT=35754 DPT=9105 SEQ=816947267 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A76152DE90000000001030307) Oct 14 05:36:40 localhost python3.9[205676]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 14 05:36:41 localhost python3.9[205784]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:42 localhost python3.9[205870]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434601.1618972-3170-156742138800920/.source.xml follow=False _original_basename=secret.xml.j2 checksum=a98993dd7f9443820dd0c69ee661382763176cb0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:42 localhost python3.9[205980]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine fcadf6e2-9176-5818-a8d0-37b19acf8eaf#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:36:43 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Oct 14 05:36:43 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. 
Oct 14 05:36:43 localhost python3.9[206100]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50392 DF PROTO=TCP SPT=59092 DPT=9101 SEQ=2470559597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76153CD30000000001030307) Oct 14 05:36:46 localhost python3.9[206438]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:46 localhost python3.9[206548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:47 localhost python3.9[206636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434606.2086341-3335-193972677793016/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Oct 14 05:36:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50394 DF PROTO=TCP SPT=59092 DPT=9101 SEQ=2470559597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761548E90000000001030307) Oct 14 05:36:49 localhost python3.9[206746]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:49 localhost python3.9[206856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:50 localhost python3.9[206913]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50395 DF PROTO=TCP SPT=59092 DPT=9101 SEQ=2470559597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761558A90000000001030307) Oct 14 05:36:51 localhost python3.9[207023]: 
ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:52 localhost python3.9[207080]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4w6ha67j recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:53 localhost python3.9[207190]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:53 localhost python3.9[207247]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55113 DF PROTO=TCP SPT=51336 DPT=9102 SEQ=1268885821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615623E0000000001030307) Oct 14 05:36:54 localhost python3.9[207357]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True 
strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:36:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55114 DF PROTO=TCP SPT=51336 DPT=9102 SEQ=1268885821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761566290000000001030307) Oct 14 05:36:55 localhost python3[207468]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 14 05:36:56 localhost python3.9[207578]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:56 localhost python3.9[207635]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:57 localhost python3.9[207745]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:36:57.591 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:36:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:36:57.592 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:36:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:36:57.592 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:36:57 localhost python3.9[207802]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:58 localhost python3.9[207912]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:36:59 localhost python3.9[207969]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:36:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:36:59 localhost podman[208032]: 2025-10-14 09:36:59.547969506 +0000 UTC m=+0.084135139 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:36:59 localhost podman[208032]: 2025-10-14 09:36:59.58719873 +0000 UTC m=+0.123364343 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 14 05:36:59 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:36:59 localhost python3.9[208104]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:00 localhost python3.9[208161]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8896 DF PROTO=TCP SPT=57752 DPT=9100 SEQ=2111582735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76157C240000000001030307) Oct 14 05:37:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4692 DF PROTO=TCP SPT=59536 DPT=9882 SEQ=2027659498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76157CC90000000001030307) Oct 14 05:37:01 localhost python3.9[208271]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:01 localhost python3.9[208361]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760434620.6131318-3710-121428808295316/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:37:02 localhost podman[208472]: 2025-10-14 09:37:02.423813856 +0000 UTC m=+0.080964096 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 05:37:02 localhost podman[208472]: 2025-10-14 09:37:02.429110599 +0000 UTC m=+0.086260809 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:37:02 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:37:02 localhost python3.9[208471]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8898 DF PROTO=TCP SPT=57752 DPT=9100 SEQ=2111582735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761588290000000001030307) Oct 14 05:37:04 localhost python3.9[208667]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:37:05 localhost python3.9[208798]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False 
prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:06 localhost python3.9[208908]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:37:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8899 DF PROTO=TCP SPT=57752 DPT=9100 SEQ=2111582735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761597E90000000001030307) Oct 14 05:37:07 localhost python3.9[209019]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:37:08 localhost python3.9[209131]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:37:09 localhost python3.9[209244]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:10 localhost python3.9[209354]: ansible-ansible.legacy.stat Invoked with 
path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=248 DF PROTO=TCP SPT=60784 DPT=9105 SEQ=2631396188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615A3290000000001030307) Oct 14 05:37:10 localhost python3.9[209442]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434629.4999018-3926-239970697082818/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:11 localhost python3.9[209552]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:11 localhost python3.9[209640]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434630.7532325-3971-6012091483742/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:12 localhost python3.9[209750]: ansible-ansible.legacy.stat Invoked with 
path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:13 localhost python3.9[209838]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434632.0507622-4016-197898002543914/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:13 localhost python3.9[209948]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:37:13 localhost systemd[1]: Reloading. Oct 14 05:37:14 localhost systemd-sysv-generator[209972]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:37:14 localhost systemd-rc-local-generator[209969]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:37:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:37:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2367 DF PROTO=TCP SPT=46416 DPT=9101 SEQ=3591809344 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615B2030000000001030307) Oct 14 05:37:14 localhost systemd[1]: Reached target edpm_libvirt.target. Oct 14 05:37:15 localhost python3.9[210098]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 14 05:37:15 localhost systemd[1]: Reloading. Oct 14 05:37:15 localhost systemd-rc-local-generator[210121]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:37:15 localhost systemd-sysv-generator[210128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:37:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:37:15 localhost systemd[1]: Reloading. Oct 14 05:37:15 localhost systemd-sysv-generator[210167]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:37:15 localhost systemd-rc-local-generator[210161]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:37:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:37:16 localhost systemd[1]: session-53.scope: Deactivated successfully. Oct 14 05:37:16 localhost systemd[1]: session-53.scope: Consumed 3min 42.154s CPU time. Oct 14 05:37:16 localhost systemd-logind[760]: Session 53 logged out. Waiting for processes to exit. Oct 14 05:37:16 localhost systemd-logind[760]: Removed session 53. Oct 14 05:37:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2369 DF PROTO=TCP SPT=46416 DPT=9101 SEQ=3591809344 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615BE290000000001030307) Oct 14 05:37:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2370 DF PROTO=TCP SPT=46416 DPT=9101 SEQ=3591809344 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615CDE90000000001030307) Oct 14 05:37:23 localhost sshd[210190]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:37:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33815 DF PROTO=TCP SPT=38482 DPT=9102 SEQ=1526829763 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615D76E0000000001030307) Oct 14 05:37:23 localhost systemd-logind[760]: New session 54 of user zuul. Oct 14 05:37:23 localhost systemd[1]: Started Session 54 of User zuul. 
Oct 14 05:37:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33816 DF PROTO=TCP SPT=38482 DPT=9102 SEQ=1526829763 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615DB690000000001030307) Oct 14 05:37:24 localhost python3.9[210301]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:37:26 localhost python3.9[210415]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:27 localhost python3.9[210525]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:27 localhost python3.9[210635]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:28 localhost python3.9[210745]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data 
selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 14 05:37:28 localhost python3.9[210855]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:37:29 localhost systemd[1]: tmp-crun.oVTIZk.mount: Deactivated successfully. 
Oct 14 05:37:29 localhost podman[210966]: 2025-10-14 09:37:29.730683569 +0000 UTC m=+0.096652631 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:37:29 localhost podman[210966]: 2025-10-14 09:37:29.79326336 +0000 UTC m=+0.159232502 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 05:37:29 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:37:29 localhost python3.9[210965]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:37:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46962 DF PROTO=TCP SPT=50656 DPT=9100 SEQ=3877565256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615F1540000000001030307) Oct 14 05:37:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46395 DF PROTO=TCP SPT=46060 DPT=9882 SEQ=2322667081 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615F1F90000000001030307) Oct 14 05:37:30 localhost python3.9[211102]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:37:30 localhost systemd[1]: Reloading. Oct 14 05:37:31 localhost systemd-rc-local-generator[211126]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:37:31 localhost systemd-sysv-generator[211132]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:37:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:37:32 localhost python3.9[211250]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:37:32 localhost network[211267]: You are using 'network' service provided by 'network-scripts', which are now deprecated. 
Oct 14 05:37:32 localhost network[211268]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:37:32 localhost network[211269]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:37:33 localhost podman[211276]: 2025-10-14 09:37:33.047349489 +0000 UTC m=+0.096778134 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:37:33 localhost podman[211276]: 2025-10-14 09:37:33.07783816 +0000 UTC m=+0.127266785 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:37:33 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:37:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46964 DF PROTO=TCP SPT=50656 DPT=9100 SEQ=3877565256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7615FD690000000001030307) Oct 14 05:37:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:37:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46965 DF PROTO=TCP SPT=50656 DPT=9100 SEQ=3877565256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76160D290000000001030307) Oct 14 05:37:37 localhost python3.9[211520]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:37:37 localhost systemd[1]: Reloading. Oct 14 05:37:37 localhost systemd-rc-local-generator[211545]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:37:37 localhost systemd-sysv-generator[211551]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 05:37:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:37:39 localhost python3.9[211666]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:37:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63253 DF PROTO=TCP SPT=53192 DPT=9105 SEQ=2001637827 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761618690000000001030307) Oct 14 05:37:40 localhost python3.9[211776]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:37:41 localhost python3.9[211888]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:42 localhost python3.9[211998]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:43 localhost python3.9[212108]: ansible-ansible.legacy.stat Invoked with 
path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:43 localhost python3.9[212165]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18402 DF PROTO=TCP SPT=57436 DPT=9101 SEQ=3914135486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761627330000000001030307) Oct 14 05:37:44 localhost python3.9[212275]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:44 localhost python3.9[212332]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:45 localhost python3.9[212442]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:46 localhost python3.9[212552]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:46 localhost python3.9[212609]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18404 DF PROTO=TCP SPT=57436 DPT=9101 SEQ=3914135486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761633290000000001030307) Oct 14 05:37:47 localhost python3.9[212719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:48 localhost python3.9[212776]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file 
path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:49 localhost python3.9[212886]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:37:49 localhost systemd[1]: Reloading. Oct 14 05:37:49 localhost systemd-rc-local-generator[212911]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:37:49 localhost systemd-sysv-generator[212916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:37:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:37:50 localhost python3.9[213034]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:51 localhost python3.9[213091]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18405 DF PROTO=TCP SPT=57436 DPT=9101 SEQ=3914135486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761642EA0000000001030307) Oct 14 05:37:51 localhost python3.9[213201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:52 localhost python3.9[213258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:53 localhost python3.9[213368]: ansible-ansible.builtin.systemd 
Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:37:53 localhost systemd[1]: Reloading. Oct 14 05:37:53 localhost systemd-rc-local-generator[213393]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:37:53 localhost systemd-sysv-generator[213398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:37:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:37:53 localhost systemd[1]: Starting Create netns directory... Oct 14 05:37:53 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 14 05:37:53 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 14 05:37:53 localhost systemd[1]: Finished Create netns directory. 
Oct 14 05:37:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63496 DF PROTO=TCP SPT=58108 DPT=9102 SEQ=3581993470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76164C9E0000000001030307) Oct 14 05:37:54 localhost python3.9[213520]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63497 DF PROTO=TCP SPT=58108 DPT=9102 SEQ=3581993470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761650A90000000001030307) Oct 14 05:37:55 localhost python3.9[213630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:56 localhost python3.9[213718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434674.9873908-695-47225792893810/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:57 localhost python3.9[213828]: 
ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:37:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:37:57.593 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:37:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:37:57.594 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:37:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:37:57.594 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:37:57 localhost python3.9[213938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:37:58 localhost python3.9[214028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434677.441875-770-95090426188360/.source.json _original_basename=.xlfrdsl9 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False 
force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:59 localhost python3.9[214138]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:37:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:38:00 localhost podman[214249]: 2025-10-14 09:38:00.080606067 +0000 UTC m=+0.085790354 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:38:00 localhost podman[214249]: 2025-10-14 09:38:00.187074288 +0000 UTC m=+0.192258515 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 14 05:38:00 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated 
successfully. Oct 14 05:38:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27932 DF PROTO=TCP SPT=47870 DPT=9100 SEQ=1288730431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761666840000000001030307) Oct 14 05:38:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51981 DF PROTO=TCP SPT=47970 DPT=9882 SEQ=948481652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616672A0000000001030307) Oct 14 05:38:02 localhost python3.9[214471]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False Oct 14 05:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:38:03 localhost podman[214527]: 2025-10-14 09:38:03.53267333 +0000 UTC m=+0.076590457 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 14 05:38:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b 
MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27934 DF PROTO=TCP SPT=47870 DPT=9100 SEQ=1288730431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761672A90000000001030307) Oct 14 05:38:03 localhost podman[214527]: 2025-10-14 09:38:03.562528135 +0000 UTC m=+0.106445292 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 05:38:03 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:38:04 localhost python3.9[214599]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:38:05 localhost podman[214813]: 2025-10-14 09:38:05.757921924 +0000 UTC m=+0.105051017 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64) Oct 14 05:38:05 localhost podman[214813]: 2025-10-14 09:38:05.853071557 +0000 UTC m=+0.200200640 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , vcs-type=git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True) Oct 14 05:38:05 localhost python3.9[214825]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 14 05:38:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27935 DF PROTO=TCP SPT=47870 DPT=9100 SEQ=1288730431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616826A0000000001030307) Oct 14 05:38:10 localhost python3[215101]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:38:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45944 DF PROTO=TCP SPT=53564 DPT=9105 SEQ=1793694037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76168DA90000000001030307) Oct 14 05:38:12 localhost podman[215114]: 2025-10-14 09:38:10.50069772 +0000 UTC m=+0.048548757 image pull quay.io/podified-antelope-centos9/openstack-iscsid:current-podified Oct 14 05:38:12 localhost python3[215101]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "4f44a4f5e0315c0d3dbd533e21d0927bf0518cf452942382901ff1ff9d621cbd",#012 "Digest": "sha256:2975c6e807fa09f0e2062da08d3a0bb209ca055d73011ebb91164def554f60aa",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid@sha256:2975c6e807fa09f0e2062da08d3a0bb209ca055d73011ebb91164def554f60aa"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-14T06:14:08.154480843Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "0468cb21803d466b2abfe00835cf1d2d",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 403858061,#012 "VirtualSize": 403858061,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec/diff:/var/lib/containers/storage/overlay/0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",#012 "sha256:2896905ce9321c1f2feb1f3ada413e86eda3444455358ab965478a041351b392",#012 "sha256:f640179b0564dc7abbe22bd39fc8810d5bbb8e54094fe7ebc5b3c45b658c4983",#012 "sha256:f004953af60f7a99c360488169b0781a154164be09dce508bd68d57932c60f8f"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "0468cb21803d466b2abfe00835cf1d2d",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-09T00:18:03.867908726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:03.868015697Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" 
org.label-schema.build-date=\"20251009\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:07.890794359Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-14T06:08:54.969219151Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969253522Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969285133Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969308103Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969342284Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969363945Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:55.340499198Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:09:32.389605838Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main 
keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:09:35.587912811Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which Oct 14 05:38:12 localhost podman[215175]: 2025-10-14 09:38:12.738252044 +0000 UTC m=+0.084102035 container remove df52bf9d9ff25c864e574cebf53a8501bcd7efa13f95683c6777b1c5359e2d3a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, architecture=x86_64, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Oct 14 05:38:12 localhost python3[215101]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force iscsid Oct 14 05:38:12 localhost podman[215189]: Oct 14 05:38:12 localhost podman[215189]: 2025-10-14 09:38:12.845602054 +0000 UTC m=+0.088649976 container create 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible) Oct 14 05:38:12 localhost podman[215189]: 2025-10-14 09:38:12.802774121 +0000 UTC m=+0.045822073 image pull quay.io/podified-antelope-centos9/openstack-iscsid:current-podified Oct 14 05:38:12 localhost python3[215101]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified Oct 14 05:38:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50089 DF PROTO=TCP SPT=49358 DPT=9101 SEQ=4271246345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76169C630000000001030307) Oct 14 05:38:14 localhost python3.9[215337]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:38:15 localhost python3.9[215449]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:16 localhost python3.9[215504]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:38:16 localhost python3.9[215613]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760434696.1315436-1034-245885626771103/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50091 DF PROTO=TCP SPT=49358 DPT=9101 SEQ=4271246345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616A8690000000001030307) Oct 14 05:38:17 localhost python3.9[215668]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:38:17 localhost systemd[1]: Reloading. Oct 14 05:38:18 localhost systemd-sysv-generator[215694]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:38:18 localhost systemd-rc-local-generator[215689]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 05:38:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:38:18 localhost python3.9[215759]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:38:18 localhost systemd[1]: Reloading. Oct 14 05:38:19 localhost systemd-rc-local-generator[215783]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:38:19 localhost systemd-sysv-generator[215789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:38:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:38:19 localhost systemd[1]: Starting iscsid container... Oct 14 05:38:19 localhost systemd[1]: Started libcrun container. 
Oct 14 05:38:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1b7c89f975e4a66a34c85e8759daed9307412fcb76862c9bc8708564b81e4b/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 05:38:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1b7c89f975e4a66a34c85e8759daed9307412fcb76862c9bc8708564b81e4b/merged/etc/target supports timestamps until 2038 (0x7fffffff) Oct 14 05:38:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1b7c89f975e4a66a34c85e8759daed9307412fcb76862c9bc8708564b81e4b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 05:38:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:38:19 localhost podman[215800]: 2025-10-14 09:38:19.411878365 +0000 UTC m=+0.147396199 container init 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 05:38:19 localhost iscsid[215814]: + sudo -E kolla_set_configs Oct 14 05:38:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:38:19 localhost podman[215800]: 2025-10-14 09:38:19.449344437 +0000 UTC m=+0.184862261 container start 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 05:38:19 localhost podman[215800]: iscsid Oct 14 05:38:19 localhost systemd[1]: Started iscsid container. Oct 14 05:38:19 localhost systemd[1]: Created slice User Slice of UID 0. Oct 14 05:38:19 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 14 05:38:19 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 14 05:38:19 localhost systemd[1]: Starting User Manager for UID 0... Oct 14 05:38:19 localhost podman[215822]: 2025-10-14 09:38:19.559140601 +0000 UTC m=+0.103051737 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:38:19 localhost podman[215822]: 2025-10-14 09:38:19.571037795 +0000 UTC m=+0.114948931 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid) Oct 14 05:38:19 localhost podman[215822]: unhealthy Oct 14 05:38:19 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:38:19 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Failed with result 'exit-code'. Oct 14 05:38:19 localhost systemd[215834]: Queued start job for default target Main User Target. Oct 14 05:38:19 localhost systemd[215834]: Created slice User Application Slice. Oct 14 05:38:19 localhost systemd[215834]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 14 05:38:19 localhost systemd[215834]: Started Daily Cleanup of User's Temporary Directories. Oct 14 05:38:19 localhost systemd[215834]: Reached target Paths. Oct 14 05:38:19 localhost systemd[215834]: Reached target Timers. Oct 14 05:38:19 localhost systemd[215834]: Starting D-Bus User Message Bus Socket... Oct 14 05:38:19 localhost systemd[215834]: Starting Create User's Volatile Files and Directories... Oct 14 05:38:19 localhost systemd[215834]: Listening on D-Bus User Message Bus Socket. Oct 14 05:38:19 localhost systemd[215834]: Reached target Sockets. Oct 14 05:38:19 localhost systemd[215834]: Finished Create User's Volatile Files and Directories. Oct 14 05:38:19 localhost systemd[215834]: Reached target Basic System. 
Oct 14 05:38:19 localhost systemd[215834]: Reached target Main User Target. Oct 14 05:38:19 localhost systemd[215834]: Startup finished in 127ms. Oct 14 05:38:19 localhost systemd[1]: Started User Manager for UID 0. Oct 14 05:38:19 localhost systemd[1]: Started Session c13 of User root. Oct 14 05:38:19 localhost iscsid[215814]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:38:19 localhost iscsid[215814]: INFO:__main__:Validating config file Oct 14 05:38:19 localhost iscsid[215814]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:38:19 localhost iscsid[215814]: INFO:__main__:Writing out command to execute Oct 14 05:38:19 localhost systemd[1]: session-c13.scope: Deactivated successfully. Oct 14 05:38:19 localhost iscsid[215814]: ++ cat /run_command Oct 14 05:38:19 localhost iscsid[215814]: + CMD='/usr/sbin/iscsid -f' Oct 14 05:38:19 localhost iscsid[215814]: + ARGS= Oct 14 05:38:19 localhost iscsid[215814]: + sudo kolla_copy_cacerts Oct 14 05:38:19 localhost systemd[1]: Started Session c14 of User root. Oct 14 05:38:19 localhost iscsid[215814]: + [[ ! -n '' ]] Oct 14 05:38:19 localhost systemd[1]: session-c14.scope: Deactivated successfully. Oct 14 05:38:19 localhost iscsid[215814]: + . kolla_extend_start Oct 14 05:38:19 localhost iscsid[215814]: Running command: '/usr/sbin/iscsid -f' Oct 14 05:38:19 localhost iscsid[215814]: ++ [[ ! 
-f /etc/iscsi/initiatorname.iscsi ]] Oct 14 05:38:19 localhost iscsid[215814]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\''' Oct 14 05:38:19 localhost iscsid[215814]: + umask 0022 Oct 14 05:38:19 localhost iscsid[215814]: + exec /usr/sbin/iscsid -f Oct 14 05:38:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50092 DF PROTO=TCP SPT=49358 DPT=9101 SEQ=4271246345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616B8290000000001030307) Oct 14 05:38:21 localhost python3.9[215967]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:38:22 localhost python3.9[216077]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:23 localhost python3.9[216187]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:38:23 localhost network[216204]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 05:38:23 localhost network[216205]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:38:23 localhost network[216206]: It is advised to switch to 'NetworkManager' instead for network management. 
Oct 14 05:38:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40584 DF PROTO=TCP SPT=59252 DPT=9102 SEQ=3354196433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616C1CE0000000001030307) Oct 14 05:38:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40585 DF PROTO=TCP SPT=59252 DPT=9102 SEQ=3354196433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616C5E90000000001030307) Oct 14 05:38:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:38:28 localhost python3.9[216439]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 14 05:38:29 localhost python3.9[216549]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Oct 14 05:38:29 localhost systemd[1]: Stopping User Manager for UID 0... Oct 14 05:38:29 localhost systemd[215834]: Activating special unit Exit the Session... Oct 14 05:38:29 localhost systemd[215834]: Stopped target Main User Target. Oct 14 05:38:29 localhost systemd[215834]: Stopped target Basic System. Oct 14 05:38:29 localhost systemd[215834]: Stopped target Paths. Oct 14 05:38:29 localhost systemd[215834]: Stopped target Sockets. Oct 14 05:38:29 localhost systemd[215834]: Stopped target Timers. 
Oct 14 05:38:29 localhost systemd[215834]: Stopped Daily Cleanup of User's Temporary Directories. Oct 14 05:38:29 localhost systemd[215834]: Closed D-Bus User Message Bus Socket. Oct 14 05:38:29 localhost systemd[215834]: Stopped Create User's Volatile Files and Directories. Oct 14 05:38:29 localhost systemd[215834]: Removed slice User Application Slice. Oct 14 05:38:29 localhost systemd[215834]: Reached target Shutdown. Oct 14 05:38:29 localhost systemd[215834]: Finished Exit the Session. Oct 14 05:38:29 localhost systemd[215834]: Reached target Exit the Session. Oct 14 05:38:29 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 14 05:38:29 localhost systemd[1]: Stopped User Manager for UID 0. Oct 14 05:38:29 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 14 05:38:29 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 14 05:38:29 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 14 05:38:29 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 14 05:38:29 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 14 05:38:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:38:30 localhost systemd[1]: tmp-crun.icIBMJ.mount: Deactivated successfully. 
Oct 14 05:38:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51161 DF PROTO=TCP SPT=60930 DPT=9100 SEQ=1778833430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616DBB30000000001030307) Oct 14 05:38:30 localhost podman[216666]: 2025-10-14 09:38:30.429402725 +0000 UTC m=+0.095726282 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:38:30 localhost podman[216666]: 2025-10-14 09:38:30.463346593 +0000 UTC m=+0.129670100 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 05:38:30 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:38:30 localhost python3.9[216665]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:38:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46790 DF PROTO=TCP SPT=46466 DPT=9882 SEQ=3000140409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616DC5C0000000001030307) Oct 14 05:38:31 localhost python3.9[216778]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434710.0270352-1256-14992354928399/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:32 localhost systemd[1]: virtnodedevd.service: Deactivated successfully. Oct 14 05:38:33 localhost python3.9[216889]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:33 localhost systemd[1]: virtproxyd.service: Deactivated successfully. 
Oct 14 05:38:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51163 DF PROTO=TCP SPT=60930 DPT=9100 SEQ=1778833430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616E7AA0000000001030307) Oct 14 05:38:33 localhost python3.9[217000]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:38:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:38:33 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 14 05:38:33 localhost systemd[1]: Stopped Load Kernel Modules. Oct 14 05:38:33 localhost systemd[1]: Stopping Load Kernel Modules... Oct 14 05:38:33 localhost systemd[1]: Starting Load Kernel Modules... Oct 14 05:38:33 localhost systemd-modules-load[217015]: Module 'msr' is built in Oct 14 05:38:33 localhost systemd[1]: Finished Load Kernel Modules. 
Oct 14 05:38:33 localhost podman[217002]: 2025-10-14 09:38:33.941837332 +0000 UTC m=+0.083474090 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:38:33 localhost podman[217002]: 2025-10-14 09:38:33.95460147 +0000 UTC 
m=+0.096238258 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:38:33 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:38:34 localhost python3.9[217131]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:38:35 localhost python3.9[217241]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:38:36 localhost python3.9[217351]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:38:36 localhost python3.9[217461]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:38:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51164 DF PROTO=TCP SPT=60930 DPT=9100 SEQ=1778833430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7616F7690000000001030307) Oct 14 05:38:37 localhost python3.9[217549]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434716.5066824-1430-167433440583794/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:38 localhost python3.9[217659]: 
ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:38:39 localhost python3.9[217770]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:40 localhost python3.9[217880]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17020 DF PROTO=TCP SPT=37740 DPT=9105 SEQ=12088999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761702A90000000001030307) Oct 14 05:38:40 localhost python3.9[217990]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:41 localhost python3.9[218100]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False 
search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:42 localhost python3.9[218210]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:42 localhost python3.9[218320]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:43 localhost python3.9[218430]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23388 DF PROTO=TCP SPT=44870 DPT=9101 SEQ=2390593927 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761711930000000001030307) Oct 14 05:38:44 localhost python3.9[218540]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:38:45 localhost 
python3.9[218652]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:45 localhost systemd[1]: virtqemud.service: Deactivated successfully. Oct 14 05:38:45 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Oct 14 05:38:45 localhost python3.9[218765]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:38:46 localhost python3.9[218875]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:38:47 localhost python3.9[218932]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:38:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23390 DF PROTO=TCP SPT=44870 DPT=9101 SEQ=2390593927 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76171DA90000000001030307) Oct 14 05:38:47 localhost python3.9[219042]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:38:48 localhost python3.9[219099]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:38:48 localhost python3.9[219209]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:49 localhost python3.9[219319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:38:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:38:50 localhost podman[219377]: 2025-10-14 09:38:50.16637593 +0000 UTC m=+0.089386275 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid) Oct 14 05:38:50 localhost podman[219377]: 2025-10-14 09:38:50.199136497 +0000 UTC m=+0.122146812 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 05:38:50 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:38:50 localhost python3.9[219376]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23391 DF PROTO=TCP SPT=44870 DPT=9101 SEQ=2390593927 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76172D690000000001030307) Oct 14 05:38:51 localhost python3.9[219503]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:38:52 localhost python3.9[219560]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:53 localhost python3.9[219670]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:38:53 localhost systemd[1]: 
Reloading. Oct 14 05:38:53 localhost systemd-rc-local-generator[219698]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:38:53 localhost systemd-sysv-generator[219702]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:38:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:38:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14089 DF PROTO=TCP SPT=44824 DPT=9102 SEQ=1348804918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761736FE0000000001030307) Oct 14 05:38:54 localhost python3.9[219819]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:38:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14090 DF PROTO=TCP SPT=44824 DPT=9102 SEQ=1348804918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76173AE90000000001030307) Oct 14 05:38:55 localhost python3.9[219876]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None 
seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:55 localhost python3.9[219986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:38:56 localhost python3.9[220043]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:38:57 localhost python3.9[220153]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:38:57 localhost systemd[1]: Reloading. Oct 14 05:38:57 localhost systemd-rc-local-generator[220178]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:38:57 localhost systemd-sysv-generator[220183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:38:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:38:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:38:57.594 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:38:57.596 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:38:57.596 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:58 localhost systemd[1]: Starting Create netns directory...
Oct 14 05:38:58 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 05:38:58 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 05:38:58 localhost systemd[1]: Finished Create netns directory.
Oct 14 05:38:59 localhost python3.9[220305]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:39:00 localhost python3.9[220415]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:39:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=478 DF PROTO=TCP SPT=43606 DPT=9100 SEQ=473710535 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761750F00000000001030307)
Oct 14 05:39:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44519 DF PROTO=TCP SPT=43100 DPT=9882 SEQ=193220881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761751890000000001030307)
Oct 14 05:39:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:39:00 localhost podman[220504]: 2025-10-14 09:39:00.751382159 +0000 UTC m=+0.096895963 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller) Oct 14 05:39:00 localhost podman[220504]: 2025-10-14 09:39:00.789033946 +0000 UTC m=+0.134547720 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009) Oct 14 05:39:00 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:39:00 localhost python3.9[220503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434739.7487996-2051-173951482355342/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 14 05:39:01 localhost python3.9[220636]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:39:02 localhost python3.9[220746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:39:03 localhost python3.9[220834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434742.1699166-2126-131462216457925/.source.json _original_basename=.wemfa5ab follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:39:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=480 DF PROTO=TCP SPT=43606 DPT=9100 SEQ=473710535 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76175CE90000000001030307) Oct 14 05:39:04 localhost python3.9[220944]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:39:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:39:04 localhost systemd[1]: tmp-crun.4LfFxw.mount: Deactivated successfully. Oct 14 05:39:04 localhost podman[221032]: 2025-10-14 09:39:04.554211347 +0000 UTC m=+0.082537235 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 05:39:04 localhost podman[221032]: 2025-10-14 09:39:04.563077911 +0000 UTC m=+0.091403839 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 14 05:39:04 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:39:06 localhost python3.9[221270]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Oct 14 05:39:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=481 DF PROTO=TCP SPT=43606 DPT=9100 SEQ=473710535 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76176CA90000000001030307) Oct 14 05:39:08 localhost python3.9[221436]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:39:09 localhost python3.9[221576]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 14 05:39:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37744 DF PROTO=TCP SPT=37656 DPT=9105 SEQ=2250072067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A761777E90000000001030307) Oct 14 05:39:13 localhost python3[221713]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:39:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29418 DF PROTO=TCP SPT=50278 DPT=9101 SEQ=71776917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761786C30000000001030307) Oct 14 05:39:15 localhost podman[221726]: 2025-10-14 09:39:13.414713623 +0000 UTC m=+0.045408283 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Oct 14 05:39:15 localhost podman[221772]: Oct 14 05:39:15 localhost podman[221772]: 2025-10-14 09:39:15.227188364 +0000 UTC m=+0.076144975 container create 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, managed_by=edpm_ansible) Oct 14 05:39:15 localhost podman[221772]: 2025-10-14 09:39:15.193653497 +0000 UTC m=+0.042610138 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Oct 14 05:39:15 localhost python3[221713]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Oct 14 05:39:16 localhost python3.9[221921]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:39:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29420 DF PROTO=TCP SPT=50278 DPT=9101 SEQ=71776917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761792E90000000001030307) Oct 14 05:39:17 
localhost python3.9[222033]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:39:18 localhost python3.9[222088]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:39:19 localhost python3.9[222197]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760434758.5206277-2390-263424606417572/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:39:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 05:39:20 localhost systemd[1]: tmp-crun.xgwXnq.mount: Deactivated successfully.
Oct 14 05:39:20 localhost podman[222253]: 2025-10-14 09:39:20.417518223 +0000 UTC m=+0.098653061 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:39:20 localhost podman[222253]: 2025-10-14 09:39:20.430006072 +0000 UTC m=+0.111140900 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 05:39:20 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:39:20 localhost python3.9[222252]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:39:20 localhost systemd[1]: Reloading. 
Oct 14 05:39:20 localhost systemd-rc-local-generator[222294]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:39:20 localhost systemd-sysv-generator[222300]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:39:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:39:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29421 DF PROTO=TCP SPT=50278 DPT=9101 SEQ=71776917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7617A2AA0000000001030307)
Oct 14 05:39:21 localhost python3.9[222362]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:39:21 localhost systemd[1]: Reloading.
Oct 14 05:39:21 localhost systemd-rc-local-generator[222389]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:39:21 localhost systemd-sysv-generator[222392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:39:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:39:22 localhost systemd[1]: Starting multipathd container...
Oct 14 05:39:22 localhost systemd[1]: tmp-crun.T0CKjT.mount: Deactivated successfully.
Oct 14 05:39:22 localhost systemd[1]: Started libcrun container. Oct 14 05:39:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb09ed2f9661334ad2b0780a0c26401517a74a1c2efdbbe77961a38ed37ec3dc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 14 05:39:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb09ed2f9661334ad2b0780a0c26401517a74a1c2efdbbe77961a38ed37ec3dc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 05:39:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:39:22 localhost podman[222402]: 2025-10-14 09:39:22.179127418 +0000 UTC m=+0.125651503 container init 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 05:39:22 localhost multipathd[222417]: + sudo -E kolla_set_configs Oct 14 05:39:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:39:22 localhost podman[222402]: 2025-10-14 09:39:22.212839796 +0000 UTC m=+0.159363851 container start 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:39:22 localhost podman[222402]: multipathd Oct 14 05:39:22 localhost systemd[1]: Started multipathd container. Oct 14 05:39:22 localhost multipathd[222417]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:39:22 localhost multipathd[222417]: INFO:__main__:Validating config file Oct 14 05:39:22 localhost multipathd[222417]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:39:22 localhost multipathd[222417]: INFO:__main__:Writing out command to execute Oct 14 05:39:22 localhost multipathd[222417]: ++ cat /run_command Oct 14 05:39:22 localhost multipathd[222417]: + CMD='/usr/sbin/multipathd -d' Oct 14 05:39:22 localhost multipathd[222417]: + ARGS= Oct 14 05:39:22 localhost multipathd[222417]: + sudo kolla_copy_cacerts Oct 14 05:39:22 localhost multipathd[222417]: + [[ ! -n '' ]] Oct 14 05:39:22 localhost multipathd[222417]: + . 
kolla_extend_start Oct 14 05:39:22 localhost multipathd[222417]: Running command: '/usr/sbin/multipathd -d' Oct 14 05:39:22 localhost multipathd[222417]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Oct 14 05:39:22 localhost multipathd[222417]: + umask 0022 Oct 14 05:39:22 localhost multipathd[222417]: + exec /usr/sbin/multipathd -d Oct 14 05:39:22 localhost podman[222425]: 2025-10-14 09:39:22.312608676 +0000 UTC m=+0.093134065 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:39:22 localhost multipathd[222417]: 10849.500305 | --------start up-------- Oct 14 05:39:22 localhost multipathd[222417]: 10849.500329 | read /etc/multipath.conf Oct 14 05:39:22 localhost multipathd[222417]: 10849.504427 | path checkers start up Oct 14 05:39:22 localhost podman[222425]: 2025-10-14 09:39:22.351132381 +0000 UTC m=+0.131657730 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 05:39:22 localhost podman[222425]: unhealthy Oct 14 05:39:22 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:39:22 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Failed with result 'exit-code'. Oct 14 05:39:22 localhost python3.9[222562]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:39:23 localhost python3.9[222674]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:39:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49298 DF PROTO=TCP SPT=54464 DPT=9102 SEQ=1300356602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7617AC2E0000000001030307) Oct 14 05:39:24 localhost python3.9[222797]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:39:24 localhost systemd[1]: Stopping multipathd container... 
Oct 14 05:39:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49299 DF PROTO=TCP SPT=54464 DPT=9102 SEQ=1300356602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7617B0290000000001030307) Oct 14 05:39:24 localhost multipathd[222417]: 10852.000390 | exit (signal) Oct 14 05:39:24 localhost multipathd[222417]: 10852.000916 | --------shut down------- Oct 14 05:39:24 localhost systemd[1]: libpod-6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.scope: Deactivated successfully. Oct 14 05:39:24 localhost podman[222801]: 2025-10-14 09:39:24.855005437 +0000 UTC m=+0.103778946 container died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:39:24 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.timer: Deactivated successfully. Oct 14 05:39:24 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:39:24 localhost systemd[1]: tmp-crun.y0SOoJ.mount: Deactivated successfully. Oct 14 05:39:24 localhost podman[222801]: 2025-10-14 09:39:24.924695133 +0000 UTC m=+0.173468632 container cleanup 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 05:39:24 localhost podman[222801]: multipathd Oct 14 05:39:25 localhost podman[222830]: 2025-10-14 09:39:25.04828889 +0000 UTC m=+0.090831994 container cleanup 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 05:39:25 localhost podman[222830]: multipathd Oct 14 05:39:25 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully. Oct 14 05:39:25 localhost systemd[1]: Stopped multipathd container. Oct 14 05:39:25 localhost systemd[1]: Starting multipathd container... Oct 14 05:39:25 localhost systemd[1]: Started libcrun container. Oct 14 05:39:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb09ed2f9661334ad2b0780a0c26401517a74a1c2efdbbe77961a38ed37ec3dc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 14 05:39:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb09ed2f9661334ad2b0780a0c26401517a74a1c2efdbbe77961a38ed37ec3dc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 05:39:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:39:25 localhost podman[222841]: 2025-10-14 09:39:25.219639307 +0000 UTC m=+0.141026918 container init 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 05:39:25 localhost multipathd[222855]: + sudo -E kolla_set_configs Oct 14 05:39:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:39:25 localhost podman[222841]: 2025-10-14 09:39:25.254238128 +0000 UTC m=+0.175625739 container start 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 05:39:25 localhost podman[222841]: multipathd Oct 14 05:39:25 localhost systemd[1]: Started multipathd container. 
Oct 14 05:39:25 localhost multipathd[222855]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:39:25 localhost multipathd[222855]: INFO:__main__:Validating config file Oct 14 05:39:25 localhost multipathd[222855]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:39:25 localhost multipathd[222855]: INFO:__main__:Writing out command to execute Oct 14 05:39:25 localhost multipathd[222855]: ++ cat /run_command Oct 14 05:39:25 localhost multipathd[222855]: + CMD='/usr/sbin/multipathd -d' Oct 14 05:39:25 localhost multipathd[222855]: + ARGS= Oct 14 05:39:25 localhost multipathd[222855]: + sudo kolla_copy_cacerts Oct 14 05:39:25 localhost multipathd[222855]: + [[ ! -n '' ]] Oct 14 05:39:25 localhost multipathd[222855]: + . kolla_extend_start Oct 14 05:39:25 localhost multipathd[222855]: Running command: '/usr/sbin/multipathd -d' Oct 14 05:39:25 localhost multipathd[222855]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Oct 14 05:39:25 localhost multipathd[222855]: + umask 0022 Oct 14 05:39:25 localhost multipathd[222855]: + exec /usr/sbin/multipathd -d Oct 14 05:39:25 localhost multipathd[222855]: 10852.528651 | --------start up-------- Oct 14 05:39:25 localhost multipathd[222855]: 10852.528671 | read /etc/multipath.conf Oct 14 05:39:25 localhost multipathd[222855]: 10852.533066 | path checkers start up Oct 14 05:39:25 localhost podman[222863]: 2025-10-14 09:39:25.370671147 +0000 UTC m=+0.112608519 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251009, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3) Oct 14 05:39:25 localhost podman[222863]: 2025-10-14 09:39:25.382073178 +0000 UTC m=+0.124010550 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 05:39:25 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:39:26 localhost python3.9[223001]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:39:26 localhost python3.9[223111]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 14 05:39:27 localhost python3.9[223221]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Oct 14 05:39:28 localhost python3.9[223339]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:39:29 localhost python3.9[223427]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434768.0755563-2630-46771999283955/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:39:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31443 DF PROTO=TCP SPT=49012 DPT=9100 SEQ=2566375644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7617C6130000000001030307) Oct 14 05:39:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49514 DF PROTO=TCP SPT=42476 DPT=9882 SEQ=3533479429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7617C6B90000000001030307) Oct 14 05:39:30 localhost python3.9[223537]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:39:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:39:31 localhost systemd[1]: tmp-crun.jNCAvl.mount: Deactivated successfully. 
Oct 14 05:39:31 localhost podman[223647]: 2025-10-14 09:39:31.332158475 +0000 UTC m=+0.096024513 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 05:39:31 localhost podman[223647]: 2025-10-14 09:39:31.377257253 +0000 UTC m=+0.141123321 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:39:31 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:39:31 localhost python3.9[223648]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:39:31 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 14 05:39:31 localhost systemd[1]: Stopped Load Kernel Modules. Oct 14 05:39:31 localhost systemd[1]: Stopping Load Kernel Modules... Oct 14 05:39:31 localhost systemd[1]: Starting Load Kernel Modules... Oct 14 05:39:31 localhost systemd-modules-load[223674]: Module 'msr' is built in Oct 14 05:39:31 localhost systemd[1]: Finished Load Kernel Modules. 
Oct 14 05:39:32 localhost python3.9[223784]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:39:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31445 DF PROTO=TCP SPT=49012 DPT=9100 SEQ=2566375644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7617D2290000000001030307) Oct 14 05:39:33 localhost python3.9[223847]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:39:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:39:35 localhost systemd[1]: tmp-crun.oDY6v0.mount: Deactivated successfully. 
Oct 14 05:39:35 localhost podman[223850]: 2025-10-14 09:39:35.554363146 +0000 UTC m=+0.096419603 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 05:39:35 localhost podman[223850]: 2025-10-14 09:39:35.584250632 +0000 UTC 
m=+0.126307069 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 05:39:35 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:39:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31446 DF PROTO=TCP SPT=49012 DPT=9100 SEQ=2566375644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7617E1E90000000001030307) Oct 14 05:39:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20210 DF PROTO=TCP SPT=52124 DPT=9105 SEQ=2280680125 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7617ED290000000001030307) Oct 14 05:39:41 localhost systemd[1]: Reloading. Oct 14 05:39:41 localhost systemd-rc-local-generator[223899]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:39:41 localhost systemd-sysv-generator[223902]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:39:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:39:42 localhost systemd[1]: Reloading. Oct 14 05:39:42 localhost systemd-rc-local-generator[223933]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:39:42 localhost systemd-sysv-generator[223937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:39:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:39:42 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button) Oct 14 05:39:42 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Oct 14 05:39:42 localhost lvm[223981]: PV /dev/loop4 online, VG ceph_vg1 is complete. Oct 14 05:39:42 localhost lvm[223981]: VG ceph_vg1 finished Oct 14 05:39:42 localhost lvm[223982]: PV /dev/loop3 online, VG ceph_vg0 is complete. Oct 14 05:39:42 localhost lvm[223982]: VG ceph_vg0 finished Oct 14 05:39:42 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 14 05:39:42 localhost systemd[1]: Starting man-db-cache-update.service... Oct 14 05:39:42 localhost systemd[1]: Reloading. Oct 14 05:39:42 localhost systemd-sysv-generator[224037]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:39:42 localhost systemd-rc-local-generator[224032]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:39:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:39:43 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 14 05:39:44 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 14 05:39:44 localhost systemd[1]: Finished man-db-cache-update.service. Oct 14 05:39:44 localhost systemd[1]: man-db-cache-update.service: Consumed 1.448s CPU time. Oct 14 05:39:44 localhost systemd[1]: run-r14ee55b6e6c645348b547c85eaeaf943.service: Deactivated successfully. 
Oct 14 05:39:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4077 DF PROTO=TCP SPT=44996 DPT=9101 SEQ=2362505497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7617FBF30000000001030307) Oct 14 05:39:45 localhost python3.9[225281]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:39:46 localhost python3.9[225389]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:39:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4079 DF PROTO=TCP SPT=44996 DPT=9101 SEQ=2362505497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761807E90000000001030307) Oct 14 05:39:47 localhost python3.9[225503]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:39:49 localhost python3.9[225613]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:39:49 
localhost systemd[1]: Reloading. Oct 14 05:39:49 localhost systemd-rc-local-generator[225636]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:39:49 localhost systemd-sysv-generator[225641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:39:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:39:50 localhost python3.9[225757]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:39:50 localhost network[225774]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 05:39:50 localhost network[225775]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:39:50 localhost network[225776]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:39:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:39:50 localhost podman[225782]: 2025-10-14 09:39:50.566208234 +0000 UTC m=+0.085243768 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:39:50 localhost podman[225782]: 2025-10-14 09:39:50.580149841 +0000 UTC m=+0.099185365 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:39:51 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:39:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4080 DF PROTO=TCP SPT=44996 DPT=9101 SEQ=2362505497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761817A90000000001030307) Oct 14 05:39:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:39:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64159 DF PROTO=TCP SPT=55070 DPT=9102 SEQ=3225294048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618215E0000000001030307) Oct 14 05:39:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64160 DF PROTO=TCP SPT=55070 DPT=9102 SEQ=3225294048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761825690000000001030307) Oct 14 05:39:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:39:55 localhost podman[225991]: 2025-10-14 09:39:55.536506089 +0000 UTC m=+0.080472601 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 05:39:55 localhost podman[225991]: 2025-10-14 09:39:55.548259349 +0000 UTC m=+0.092225881 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 05:39:55 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:39:56 localhost python3.9[226048]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:39:57 localhost python3.9[226159]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:39:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:39:57.595 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:39:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:39:57.595 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:39:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:39:57.595 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:39:57 localhost python3.9[226270]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:39:59 localhost python3.9[226381]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None 
masked=None Oct 14 05:39:59 localhost python3.9[226492]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:40:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36361 DF PROTO=TCP SPT=45504 DPT=9100 SEQ=2966201193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76183B440000000001030307) Oct 14 05:40:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5025 DF PROTO=TCP SPT=48884 DPT=9882 SEQ=193609462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76183BE90000000001030307) Oct 14 05:40:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:40:01 localhost podman[226604]: 2025-10-14 09:40:01.550627384 +0000 UTC m=+0.084686893 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:40:01 localhost podman[226604]: 2025-10-14 09:40:01.594069828 +0000 UTC m=+0.128129357 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:40:01 localhost python3.9[226603]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:40:01 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:40:03 localhost python3.9[226739]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:40:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36363 DF PROTO=TCP SPT=45504 DPT=9100 SEQ=2966201193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761847690000000001030307) Oct 14 05:40:04 localhost python3.9[226850]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:40:05 localhost python3.9[226961]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:05 localhost sshd[226962]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:40:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:40:05 localhost podman[227052]: 2025-10-14 09:40:05.768080049 +0000 UTC m=+0.076493207 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Oct 14 05:40:05 localhost podman[227052]: 2025-10-14 09:40:05.807076707 +0000 UTC 
m=+0.115489835 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 05:40:05 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:40:05 localhost python3.9[227091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:06 localhost python3.9[227201]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:07 localhost python3.9[227311]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36364 DF PROTO=TCP SPT=45504 DPT=9100 SEQ=2966201193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761857290000000001030307) Oct 14 05:40:07 localhost python3.9[227421]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:08 localhost python3.9[227531]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:09 localhost python3.9[227641]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3119 DF PROTO=TCP SPT=48242 DPT=9105 SEQ=1983780385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618626A0000000001030307) Oct 14 05:40:10 localhost python3.9[227819]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Oct 14 05:40:11 localhost python3.9[227947]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:11 localhost python3.9[228057]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:12 localhost python3.9[228167]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:13 localhost python3.9[228277]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:14 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57634 DF PROTO=TCP SPT=57168 DPT=9101 SEQ=1024391088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761871220000000001030307) Oct 14 05:40:14 localhost python3.9[228387]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:14 localhost python3.9[228497]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:15 localhost python3.9[228607]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:16 localhost python3.9[228717]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:16 localhost python3.9[228827]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:40:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57636 DF PROTO=TCP SPT=57168 DPT=9101 SEQ=1024391088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76187D290000000001030307) Oct 14 05:40:17 localhost python3.9[228937]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 14 05:40:18 localhost python3.9[229047]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:40:18 localhost systemd[1]: Reloading. Oct 14 05:40:19 localhost systemd-sysv-generator[229077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 05:40:19 localhost systemd-rc-local-generator[229072]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:40:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:40:19 localhost python3.9[229193]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:40:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57637 DF PROTO=TCP SPT=57168 DPT=9101 SEQ=1024391088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76188CE90000000001030307) Oct 14 05:40:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:40:21 localhost systemd[1]: tmp-crun.LcUJWy.mount: Deactivated successfully. 
Oct 14 05:40:21 localhost podman[229261]: 2025-10-14 09:40:21.575912747 +0000 UTC m=+0.100360725 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.vendor=CentOS) Oct 14 05:40:21 localhost podman[229261]: 2025-10-14 09:40:21.618224595 +0000 UTC m=+0.142672573 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 05:40:21 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:40:21 localhost python3.9[229324]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:40:22 localhost python3.9[229435]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:40:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59196 DF PROTO=TCP SPT=38256 DPT=9102 SEQ=3514712096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618968D0000000001030307) Oct 14 05:40:24 localhost python3.9[229546]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:40:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59197 DF PROTO=TCP SPT=38256 DPT=9102 SEQ=3514712096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76189AA90000000001030307) Oct 14 05:40:24 localhost python3.9[229657]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None 
removes=None stdin=None Oct 14 05:40:25 localhost python3.9[229768]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:40:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:40:25 localhost podman[229770]: 2025-10-14 09:40:25.697191001 +0000 UTC m=+0.073385354 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 14 05:40:25 localhost podman[229770]: 2025-10-14 09:40:25.71417496 +0000 UTC m=+0.090369343 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:40:25 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:40:26 localhost python3.9[229898]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:40:26 localhost python3.9[230009]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:40:28 localhost python3.9[230120]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:29 localhost python3.9[230230]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 
05:40:30 localhost python3.9[230340]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53035 DF PROTO=TCP SPT=58518 DPT=9100 SEQ=2498246492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618B0740000000001030307) Oct 14 05:40:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46582 DF PROTO=TCP SPT=56406 DPT=9882 SEQ=258743635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618B11A0000000001030307) Oct 14 05:40:30 localhost python3.9[230450]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:31 localhost python3.9[230560]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:40:31 localhost podman[230671]: 2025-10-14 09:40:31.908982737 +0000 UTC m=+0.079041808 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller) Oct 14 05:40:31 localhost podman[230671]: 2025-10-14 09:40:31.9495341 +0000 UTC m=+0.119593151 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0) Oct 14 05:40:31 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:40:32 localhost python3.9[230670]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:32 localhost python3.9[230804]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:33 localhost python3.9[230914]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53037 DF PROTO=TCP SPT=58518 DPT=9100 SEQ=2498246492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618BC690000000001030307) Oct 14 05:40:34 localhost python3.9[231024]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:34 localhost python3.9[231134]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:40:36 localhost podman[231245]: 2025-10-14 09:40:36.060087312 +0000 UTC m=+0.079271912 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:40:36 localhost podman[231245]: 2025-10-14 09:40:36.092257755 +0000 UTC m=+0.111442315 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:40:36 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:40:36 localhost python3.9[231244]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:36 localhost python3.9[231371]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53038 DF PROTO=TCP SPT=58518 DPT=9100 SEQ=2498246492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618CC290000000001030307) Oct 14 05:40:40 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44138 DF PROTO=TCP SPT=37340 DPT=9105 SEQ=2147579508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618D7690000000001030307) Oct 14 05:40:43 localhost python3.9[231481]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Oct 14 05:40:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57837 DF PROTO=TCP SPT=40318 DPT=9101 SEQ=3418931153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618E6520000000001030307) Oct 14 05:40:44 localhost python3.9[231592]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 14 05:40:45 localhost python3.9[231708]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005486731.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Oct 14 05:40:46 localhost sshd[231734]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:40:46 localhost systemd-logind[760]: New session 56 of user zuul. 
Oct 14 05:40:46 localhost systemd[1]: Started Session 56 of User zuul. Oct 14 05:40:46 localhost systemd[1]: session-56.scope: Deactivated successfully. Oct 14 05:40:46 localhost systemd-logind[760]: Session 56 logged out. Waiting for processes to exit. Oct 14 05:40:47 localhost systemd-logind[760]: Removed session 56. Oct 14 05:40:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57839 DF PROTO=TCP SPT=40318 DPT=9101 SEQ=3418931153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7618F2690000000001030307) Oct 14 05:40:47 localhost python3.9[231845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:40:48 localhost python3.9[231931]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434847.1843846-4267-200865937632362/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:48 localhost python3.9[232039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:40:49 localhost python3.9[232094]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:50 localhost python3.9[232202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:40:51 localhost python3.9[232288]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434849.5386286-4267-105351239542644/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57840 DF PROTO=TCP SPT=40318 DPT=9101 SEQ=3418931153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761902290000000001030307) Oct 14 05:40:52 localhost python3.9[232396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:40:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:40:52 localhost podman[232446]: 2025-10-14 09:40:52.548169922 +0000 UTC m=+0.086914476 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 05:40:52 localhost podman[232446]: 2025-10-14 09:40:52.583231217 +0000 UTC m=+0.121975721 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:40:52 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:40:53 localhost python3.9[232502]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434851.4971333-4267-238073197968198/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=80098a213e897ecefc50c1420f932ebe70b1fea3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4801 DF PROTO=TCP SPT=54904 DPT=9102 SEQ=3781536638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76190BBE0000000001030307) Oct 14 05:40:53 localhost python3.9[232610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:40:54 localhost python3.9[232696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434853.4184036-4267-58804993800618/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:40:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4802 DF 
PROTO=TCP SPT=54904 DPT=9102 SEQ=3781536638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76190FA90000000001030307) Oct 14 05:40:55 localhost python3.9[232806]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:40:55 localhost podman[232917]: 2025-10-14 09:40:55.887546984 +0000 UTC m=+0.081709484 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd) Oct 14 05:40:55 localhost podman[232917]: 2025-10-14 09:40:55.901074766 +0000 UTC m=+0.095237236 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 05:40:55 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:40:56 localhost python3.9[232916]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:56 localhost python3.9[233046]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:40:57 localhost python3.9[233158]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:40:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:40:57.596 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:40:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:40:57.597 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:40:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:40:57.597 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:40:58 localhost python3.9[233266]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:40:59 localhost python3.9[233376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:40:59 localhost python3.9[233462]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434858.5858035-4600-128921436851371/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:41:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=17445 DF PROTO=TCP SPT=43120 DPT=9100 SEQ=2162401541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761925A40000000001030307) Oct 14 05:41:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36982 DF PROTO=TCP SPT=50976 DPT=9882 SEQ=2002286545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761926490000000001030307) Oct 14 05:41:00 localhost python3.9[233570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:41:01 localhost python3.9[233656]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434860.0955777-4645-170998400467463/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:41:02 localhost python3.9[233766]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Oct 14 05:41:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:41:02 localhost podman[233822]: 2025-10-14 09:41:02.552678135 +0000 UTC m=+0.084233318 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 05:41:02 localhost podman[233822]: 2025-10-14 09:41:02.625199246 +0000 UTC m=+0.156754399 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 05:41:02 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:41:02 localhost python3.9[233902]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:41:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17447 DF PROTO=TCP SPT=43120 DPT=9100 SEQ=2162401541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761931A90000000001030307) Oct 14 05:41:04 localhost python3[234012]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:41:06 localhost podman[234039]: 2025-10-14 09:41:06.54963058 +0000 UTC m=+0.089879931 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 05:41:06 localhost podman[234039]: 2025-10-14 09:41:06.587039054 +0000 UTC 
m=+0.127288435 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 05:41:06 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:41:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17448 DF PROTO=TCP SPT=43120 DPT=9100 SEQ=2162401541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761941690000000001030307) Oct 14 05:41:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33781 DF PROTO=TCP SPT=38104 DPT=9105 SEQ=776340024 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76194CA90000000001030307) Oct 14 05:41:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62160 DF PROTO=TCP SPT=45282 DPT=9101 SEQ=346798064 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76195B830000000001030307) Oct 14 05:41:15 localhost podman[234026]: 2025-10-14 09:41:04.418531054 +0000 UTC m=+0.051691226 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Oct 14 05:41:15 localhost podman[234278]: Oct 14 05:41:15 localhost podman[234278]: 2025-10-14 09:41:15.700627374 +0000 UTC m=+0.074814100 container create 8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': 
['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:41:15 localhost podman[234278]: 2025-10-14 09:41:15.660208463 +0000 UTC m=+0.034395219 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Oct 14 05:41:15 localhost python3[234012]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume 
/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Oct 14 05:41:16 localhost python3.9[234651]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:41:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62162 DF PROTO=TCP SPT=45282 DPT=9101 SEQ=346798064 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761967AA0000000001030307) Oct 14 05:41:18 localhost python3.9[234763]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Oct 14 05:41:19 localhost python3.9[234873]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:41:21 localhost python3[234983]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:41:21 localhost python3[234983]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "b5b57d3572ac74b7c41332c066527d5039dbd47e134e43d7cb5d76b7732d99f5",#012 "Digest": "sha256:6cdce1b6b9f1175545fa217f885c1a3360bebe7d9975584481a6ff221f3ad48f",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 
"quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:6cdce1b6b9f1175545fa217f885c1a3360bebe7d9975584481a6ff221f3ad48f"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-13T12:50:19.385564198Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1207014273,#012 "VirtualSize": 1207014273,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36/diff:/var/lib/containers/storage/overlay/0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861/diff:/var/lib/containers/storage/overlay/ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 
"sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",#012 "sha256:2c35d1af0a6e73cbcf6c04a576d2e6a150aeaa6ae9408c81b2003edd71d6ae59",#012 "sha256:3ad61591f8d467f7db4e096e1991f274fe1d4f8ad685b553dacb57c5e894eab0",#012 "sha256:e0ba9b00dd1340fa4eba9e9cd5f316c11381d47a31460e5b834a6ca56f60033f",#012 "sha256:731e9354c974a424a2f6724faa85f84baef270eb006be0de18bbdc87ff420f97"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-10-09T00:18:03.867908726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:03.868015697Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251009\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:07.890794359Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-13T12:28:42.843286399Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843354051Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 
"empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843394192Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843417133Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843442193Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843461914Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:43.236856724Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:29:17.539596691Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Oct 14 05:41:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62163 DF PROTO=TCP SPT=45282 DPT=9101 SEQ=346798064 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A761977690000000001030307) Oct 14 05:41:21 localhost podman[235034]: 2025-10-14 09:41:21.416536029 +0000 UTC m=+0.094079716 container remove a6e0ba4b26389ee17e6ca051eecb56ebb82ef586d309b0a732e9e898fa5d847e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.9, release=1, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bd9d045a0b37801182392caf49375c15-f5be0e0347f8a081fe8927c6f95950cc'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=nova_compute) Oct 14 05:41:21 localhost python3[234983]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Oct 14 05:41:21 localhost podman[235048]: Oct 14 05:41:21 localhost podman[235048]: 2025-10-14 09:41:21.520929864 +0000 UTC m=+0.087005938 container create 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:41:21 localhost podman[235048]: 2025-10-14 09:41:21.481055018 +0000 UTC m=+0.047131102 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Oct 14 05:41:21 localhost python3[234983]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', 
'/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Oct 14 05:41:22 localhost python3.9[235194]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:41:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:41:23 localhost systemd[1]: tmp-crun.454aeD.mount: Deactivated successfully. 
Oct 14 05:41:23 localhost podman[235306]: 2025-10-14 09:41:23.454645398 +0000 UTC m=+0.092356183 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 05:41:23 localhost podman[235306]: 2025-10-14 09:41:23.496139714 +0000 UTC m=+0.133850479 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:41:23 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:41:23 localhost python3.9[235307]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:41:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55527 DF PROTO=TCP SPT=55538 DPT=9102 SEQ=3329265209 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761980EE0000000001030307) Oct 14 05:41:24 localhost python3.9[235432]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760434883.6537433-4921-29981601873662/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:41:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55528 DF PROTO=TCP SPT=55538 DPT=9102 SEQ=3329265209 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761984E90000000001030307) Oct 14 05:41:25 localhost python3.9[235487]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:41:25 localhost systemd[1]: Reloading. Oct 14 05:41:25 localhost systemd-rc-local-generator[235509]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 05:41:25 localhost systemd-sysv-generator[235513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:41:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:41:26 localhost python3.9[235578]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:41:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:41:26 localhost podman[235580]: 2025-10-14 09:41:26.167814296 +0000 UTC m=+0.084018907 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, config_id=multipathd, tcib_managed=true) Oct 14 05:41:26 localhost podman[235580]: 2025-10-14 09:41:26.184015239 +0000 UTC m=+0.100219860 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 05:41:26 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:41:27 localhost systemd[1]: Reloading. Oct 14 05:41:27 localhost systemd-rc-local-generator[235621]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:41:27 localhost systemd-sysv-generator[235626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:41:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:41:27 localhost systemd[1]: Starting nova_compute container... Oct 14 05:41:27 localhost systemd[1]: tmp-crun.LPXMUE.mount: Deactivated successfully. Oct 14 05:41:27 localhost systemd[1]: Started libcrun container. 
Oct 14 05:41:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:27 localhost podman[235638]: 2025-10-14 09:41:27.660599973 +0000 UTC m=+0.138643944 container init 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:41:27 localhost podman[235638]: 2025-10-14 09:41:27.670517502 +0000 UTC m=+0.148561473 container start 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 05:41:27 localhost podman[235638]: nova_compute
Oct 14 05:41:27 localhost nova_compute[235653]: + sudo -E kolla_set_configs
Oct 14 05:41:27 localhost systemd[1]: Started nova_compute container.
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Validating config file
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying service configuration files
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Deleting /etc/ceph
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Creating directory /etc/ceph
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/ceph
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Writing out command to execute
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 14 05:41:27 localhost nova_compute[235653]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 14 05:41:27 localhost nova_compute[235653]: ++ cat /run_command
Oct 14 05:41:27 localhost nova_compute[235653]: + CMD=nova-compute
Oct 14 05:41:27 localhost nova_compute[235653]: + ARGS=
Oct 14 05:41:27 localhost nova_compute[235653]: + sudo kolla_copy_cacerts
Oct 14 05:41:27 localhost nova_compute[235653]: + [[ ! -n '' ]]
Oct 14 05:41:27 localhost nova_compute[235653]: + . kolla_extend_start
Oct 14 05:41:27 localhost nova_compute[235653]: Running command: 'nova-compute'
Oct 14 05:41:27 localhost nova_compute[235653]: + echo 'Running command: '\''nova-compute'\'''
Oct 14 05:41:27 localhost nova_compute[235653]: + umask 0022
Oct 14 05:41:27 localhost nova_compute[235653]: + exec nova-compute
Oct 14 05:41:28 localhost python3.9[235773]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:41:29 localhost nova_compute[235653]: 2025-10-14 09:41:29.416 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 14 05:41:29 localhost nova_compute[235653]: 2025-10-14 09:41:29.416 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 14 05:41:29 localhost nova_compute[235653]: 2025-10-14 09:41:29.417 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 14 05:41:29 localhost nova_compute[235653]: 2025-10-14 09:41:29.417 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 14 05:41:29 localhost nova_compute[235653]: 2025-10-14 09:41:29.527 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:29 localhost nova_compute[235653]: 2025-10-14 09:41:29.550 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.013 2 INFO nova.virt.driver [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.126 2 INFO nova.compute.provider_config [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.136 2 WARNING nova.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.136 2 DEBUG oslo_concurrency.lockutils [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.136 2 DEBUG oslo_concurrency.lockutils [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.137 2 DEBUG oslo_concurrency.lockutils [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.137 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.137 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.137 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.137 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.137 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.138 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.138 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.138 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.138 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.138 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.139 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.139 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.139 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.139 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.139 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.139 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.140 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.140 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.140 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.140 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] console_host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.140 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.141 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.141 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.141 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.141 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.141 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.141 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.142 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.142 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.142 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.142 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.142 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.143 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.143 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.143 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.143 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.143 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.143 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.144 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.144 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.144 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.144 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.145 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.145 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.145 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.145 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.145 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.145 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.146 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.146 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.146 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.146 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.147 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.147 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.147 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.147 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.147 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.147 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.148 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.148 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.148 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.148 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.148 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.148 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.149 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.149 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.149 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.149 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.149 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.149 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.150 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.150 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.150 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.150 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.150 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.151 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.151 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.151 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.151 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.151 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.151 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.152 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.152 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.152 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.152 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.152 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.153 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.153 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.153 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.153 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.153 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.153 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.154 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.154 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.154 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.154 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.154 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.155 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.155 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 05:41:30 localhost
nova_compute[235653]: 2025-10-14 09:41:30.155 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.155 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.155 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.155 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.156 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.156 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.156 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.156 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] reserved_host_disk_mb = 
0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.156 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.156 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.157 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.157 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.157 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.157 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.157 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m 
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.158 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.158 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.158 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.158 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.158 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.158 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.159 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 
09:41:30.159 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.159 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.159 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.159 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.160 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.160 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.160 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.160 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] sync_power_state_pool_size = 1000 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.160 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.160 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.161 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.161 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.161 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.161 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.161 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.161 2 
DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.162 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.162 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.162 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.162 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.162 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.163 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.163 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plugging_timeout = 300 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.163 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.163 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.163 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.163 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.164 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.164 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.164 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.165 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.165 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.165 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.166 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.166 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.166 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.167 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 
2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.167 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.167 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.167 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.168 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.168 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.168 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.169 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.169 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.169 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.169 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.169 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.170 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.170 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.170 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.170 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.170 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.170 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.170 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.171 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.171 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.171 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 
2025-10-14 09:41:30.171 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.171 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.171 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.171 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.171 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.172 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.172 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.172 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.172 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.172 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.172 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.172 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.173 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.173 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.173 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.173 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.173 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.173 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.173 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.174 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.174 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.174 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.retry_attempts = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.174 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.174 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.174 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.174 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.174 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.175 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.175 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost 
nova_compute[235653]: 2025-10-14 09:41:30.175 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.175 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.175 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.175 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.175 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.175 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.176 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.176 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.176 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.176 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.176 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.176 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.176 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.177 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.177 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.os_region_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.177 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.177 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.177 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.177 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.177 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.177 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.178 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.178 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.178 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.178 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.178 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.178 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.178 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.178 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] compute.vmdk_allowed_types = 
['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.179 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.179 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.179 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.179 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.179 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.179 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.179 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.180 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.180 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.180 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.180 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.180 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.180 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.181 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.181 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.181 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.181 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.181 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.181 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.181 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.181 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.182 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.182 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.182 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.182 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.182 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.182 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.182 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.183 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 
localhost nova_compute[235653]: 2025-10-14 09:41:30.183 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.183 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.183 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.183 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.183 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.183 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.183 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.184 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.184 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.184 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.184 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.184 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.184 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.184 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.185 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] database.sqlite_synchronous = 
True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.185 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.185 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.185 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.185 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.185 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.185 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.186 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.186 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.186 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.186 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.186 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.186 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.186 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.186 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.187 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.187 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.187 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.187 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.187 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.187 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.187 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 
09:41:30.187 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.188 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.188 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.188 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.188 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.188 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.188 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.188 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.189 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.189 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.189 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.189 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.189 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.189 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.189 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.insecure = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.189 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.190 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.190 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.190 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.190 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.190 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.190 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 
2025-10-14 09:41:30.191 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.191 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.191 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.191 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.191 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.191 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.191 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.191 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - 
- -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.192 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.192 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.193 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.193 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.193 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.194 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.194 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.194 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.195 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.195 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.195 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.196 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.196 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.196 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.196 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.196 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.197 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.197 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.197 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.197 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.197 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.vswitch_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.197 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.197 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.198 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.198 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.198 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.198 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.198 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.198 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.199 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.199 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.199 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.199 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.199 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.199 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 
localhost nova_compute[235653]: 2025-10-14 09:41:30.199 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.199 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.200 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.200 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.200 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.200 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.200 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.200 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.200 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.201 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.201 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.201 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.201 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.201 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.201 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.service_type = baremetal log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.201 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.201 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.202 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.202 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.202 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.202 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.202 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 
localhost nova_compute[235653]: 2025-10-14 09:41:30.202 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.202 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.203 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.203 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.203 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.203 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.203 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.203 
2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.203 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.203 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.204 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.204 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.204 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.204 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.204 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.204 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.204 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.204 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.205 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.205 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.205 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.205 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican_service_user.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.205 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.205 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.205 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.205 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.206 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.206 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.206 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.approle_secret_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.206 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.206 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.206 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.206 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.207 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.207 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.207 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.207 2 DEBUG 
oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.207 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.207 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.207 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.207 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.208 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.208 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.208 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.208 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.208 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.208 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.208 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.209 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.209 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.209 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost 
nova_compute[235653]: 2025-10-14 09:41:30.209 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.209 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.209 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.209 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.209 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.210 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.210 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.210 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.210 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.210 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.210 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.210 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.210 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.211 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.211 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.cpu_models = [] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.211 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.211 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.211 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.211 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.211 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.212 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.212 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.212 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.212 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.212 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.212 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.212 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.212 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.213 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.213 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.213 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.213 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.213 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.213 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.213 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.214 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 
2025-10-14 09:41:30.214 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.214 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.214 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.214 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.214 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.214 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.214 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.215 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.215 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.215 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.215 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.215 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.215 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.215 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.216 2 DEBUG oslo_service.service 
[None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.216 2 WARNING oslo_config.cfg [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Oct 14 05:41:30 localhost nova_compute[235653]: live_migration_uri is deprecated for removal in favor of two other options that Oct 14 05:41:30 localhost nova_compute[235653]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Oct 14 05:41:30 localhost nova_compute[235653]: and ``live_migration_inbound_addr`` respectively. Oct 14 05:41:30 localhost nova_compute[235653]: ). Its value may be silently ignored in the future.#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.216 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.216 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.216 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.216 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.217 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.217 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.217 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.217 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.217 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.217 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.217 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.217 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.218 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.218 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.218 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.218 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.218 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.218 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.218 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rbd_secret_uuid = fcadf6e2-9176-5818-a8d0-37b19acf8eaf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.219 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.219 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.219 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.219 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.219 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.219 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.219 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.219 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.220 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.220 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.220 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.220 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.220 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.221 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.221 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.221 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.221 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.221 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.221 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.221 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost 
nova_compute[235653]: 2025-10-14 09:41:30.222 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.222 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.222 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.222 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.222 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.222 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.222 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 
09:41:30.222 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.223 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.223 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.223 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.223 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.223 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.223 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.223 2 DEBUG oslo_service.service 
[None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.224 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.224 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.224 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.224 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.224 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.224 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.224 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.endpoint_override = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.225 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.225 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.225 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.225 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.225 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.225 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.225 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 
localhost nova_compute[235653]: 2025-10-14 09:41:30.226 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.226 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.226 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.226 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.226 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.226 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.226 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.226 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.227 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.227 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.227 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.227 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.227 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.227 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.227 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.227 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.228 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.228 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.228 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.228 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.228 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.228 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.auth_type = password log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.228 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.229 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.229 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.229 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.229 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.229 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.229 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.default_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.229 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.229 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.230 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.230 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.230 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.230 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.230 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost 
nova_compute[235653]: 2025-10-14 09:41:30.230 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.230 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.231 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.231 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.231 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.231 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.231 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.231 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.231 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.231 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.232 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.232 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.232 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.232 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.232 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.trust_id = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.232 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.232 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.232 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.233 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.233 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.233 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.233 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost 
nova_compute[235653]: 2025-10-14 09:41:30.233 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.233 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.233 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.234 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.234 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.234 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.234 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.234 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.234 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.234 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.234 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.235 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.235 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.235 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.235 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.235 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.235 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.236 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.236 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.236 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.236 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.236 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.236 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.236 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.236 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.237 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.237 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.237 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.237 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.237 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.237 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.237 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.238 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.238 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.238 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.238 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.238 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.238 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.238 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.238 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.239 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.239 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.239 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.239 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.239 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.239 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.239 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.240 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.240 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.240 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.240 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.240 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.240 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.240 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.241 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.241 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.241 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.241 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.241 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.241 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.241 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.auth_type = password log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.241 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.242 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.242 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.242 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.242 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.242 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.242 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 
localhost nova_compute[235653]: 2025-10-14 09:41:30.242 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.242 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.243 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.243 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.243 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.243 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.243 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.243 2 DEBUG 
oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.243 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.244 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.244 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.244 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.244 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.244 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.244 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.244 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.245 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.245 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.245 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.245 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.245 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.245 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.245 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.245 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.246 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.246 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.246 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.246 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.246 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.ca_file = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.246 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.246 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.247 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.247 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.247 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.247 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.247 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost 
nova_compute[235653]: 2025-10-14 09:41:30.247 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.247 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.247 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.248 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.248 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.248 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.248 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.248 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.248 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.248 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.248 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.249 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.249 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.249 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.249 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.vnc_port = 5900 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.249 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.249 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.249 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.250 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.250 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.250 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.250 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.server_listen = ::0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.250 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.250 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.250 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.251 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.251 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.251 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.251 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.251 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.251 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.251 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.252 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.252 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.252 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.252 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.252 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.252 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.252 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.252 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.253 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.253 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.253 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.253 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.253 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.253 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.253 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.254 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.254 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.254 2 
DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.254 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.254 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.254 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.254 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.254 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.255 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.255 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.tcp_keepidle = 600 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.255 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.255 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.255 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.255 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.255 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.256 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.256 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_policy.enforce_scope 
= True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.256 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.256 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.256 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.256 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.256 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.257 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.257 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.257 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.257 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.257 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.257 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.257 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.257 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.258 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.258 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.258 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.258 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.258 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.258 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.258 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.259 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.259 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.259 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.259 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.259 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.259 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.259 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 
2025-10-14 09:41:30.259 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.260 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.260 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.260 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.260 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.260 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.260 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.260 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.261 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.261 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.261 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.261 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.261 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.261 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.261 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.261 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.262 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.262 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.262 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.262 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.262 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.262 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.262 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.263 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.263 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.263 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.263 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.263 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.263 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.263 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.263 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.264 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.264 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.264 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.264 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 
localhost nova_compute[235653]: 2025-10-14 09:41:30.264 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.264 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.264 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.265 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.265 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.265 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.265 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.265 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.265 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.265 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.265 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.266 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.266 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.266 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.266 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.266 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.266 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.266 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.266 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.267 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.267 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.267 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.267 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.267 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.267 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.267 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.267 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.268 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.268 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.268 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.268 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.268 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.268 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.268 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.269 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.269 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] 
vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.269 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.269 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.269 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.269 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.269 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.269 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.270 2 DEBUG oslo_service.service [None 
req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.270 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.270 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.270 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.270 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.270 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.270 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.271 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - 
- - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.271 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.271 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.271 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.271 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.271 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.271 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.271 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] privsep_osbrick.group = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.272 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.272 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.272 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.272 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.272 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.272 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.272 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] nova_sys_admin.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.273 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.273 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.273 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.273 2 DEBUG oslo_service.service [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.274 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.317 2 INFO nova.virt.node [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Determined node identity ebb6de71-88e5-4477-92fc-f2b9532f7fcd from /var/lib/nova/compute_id#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.317 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.318 2 
DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.318 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.318 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Oct 14 05:41:30 localhost systemd[1]: Starting libvirt QEMU daemon... Oct 14 05:41:30 localhost systemd[1]: Started libvirt QEMU daemon. Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.382 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.386 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.387 2 INFO nova.virt.libvirt.driver [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Connection event '1' reason 'None'#033[00m Oct 14 05:41:30 localhost nova_compute[235653]: 2025-10-14 09:41:30.404 2 DEBUG nova.virt.libvirt.volume.mount [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Initialising _HostMountState generation 0 host_up 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Oct 14 05:41:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45456 DF PROTO=TCP SPT=41982 DPT=9100 SEQ=4217310564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76199AD30000000001030307) Oct 14 05:41:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10875 DF PROTO=TCP SPT=53008 DPT=9882 SEQ=2960102090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76199B7A0000000001030307) Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.307 2 INFO nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Libvirt host capabilities Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: adf6dc17-eeaa-420b-a893-ea8f9e53b331 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: x86_64 Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome-v4 Oct 14 05:41:31 localhost nova_compute[235653]: AMD Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
Oct 14 05:41:31 localhost nova_compute[235653]: [capabilities XML continued, markup stripped. Recoverable fields: migration URI transports tcp and rdma; host memory 16116612 KiB, 4029153 pages (plus two zero-valued page counters); security models selinux (doi 0, base labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0, baselabel +107:+107)]
Oct 14 05:41:31 localhost nova_compute[235653]: [capabilities XML continued, markup stripped. A second dac baselabel +107:+107, then two hvm guest entries at word sizes 32 and 64, both with emulator /usr/libexec/qemu-kvm and machine types pc-i440fx-rhel7.6.0 (canonical pc), pc-q35-rhel9.6.0 (canonical q35), pc-q35-rhel7.6.0, pc-q35-rhel8.0.0 through pc-q35-rhel8.6.0, pc-q35-rhel9.0.0, pc-q35-rhel9.2.0, and pc-q35-rhel9.4.0]
Oct 14 05:41:31 localhost nova_compute[235653]: #033[00m
Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.318 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.339 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 14 05:41:31 localhost nova_compute[235653]: [multi-line domainCapabilities XML follows; element markup was stripped in this capture. Recoverable fields: path /usr/libexec/qemu-kvm, domain kvm, machine pc-q35-rhel9.6.0, arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd with remaining value fragments rom, pflash, yes, no, no]
Oct 14 05:41:31 localhost nova_compute[235653]: [domainCapabilities XML continued, markup stripped. Recoverable CPU-section fields: on/off toggle fragments, host-model EPYC-Rome, vendor AMD; supported named CPU models (usability attributes lost): 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell (model list truncated here; dump continues)]
Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-noTSX Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-noTSX-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: 
Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-noTSX Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v5 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v6 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v7 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: IvyBridge Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: IvyBridge-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: IvyBridge-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: IvyBridge-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: KnightsMill Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: KnightsMill-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Nehalem Oct 14 05:41:31 localhost nova_compute[235653]: Nehalem-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Nehalem-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Nehalem-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G1 Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G1-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G2 Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G2-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G3 Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G3-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G4-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G5 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Opteron_G5-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Penryn Oct 14 05:41:31 localhost nova_compute[235653]: Penryn-v1 Oct 14 05:41:31 localhost nova_compute[235653]: SandyBridge Oct 14 05:41:31 localhost nova_compute[235653]: SandyBridge-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: SandyBridge-v1 Oct 14 05:41:31 localhost nova_compute[235653]: SandyBridge-v2 Oct 14 05:41:31 localhost nova_compute[235653]: SapphireRapids Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: SapphireRapids-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: SapphireRapids-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: SapphireRapids-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: SierraForest Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
Oct 14 05:41:31 localhost nova_compute[235653]: SierraForest-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Client
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Client-IBRS
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Client-noTSX-IBRS
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Client-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Client-v2
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Client-v3
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Client-v4
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-IBRS
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-noTSX-IBRS
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-v2
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-v3
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-v4
Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-v5
Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge
Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge-v2
Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge-v3
Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge-v4
Oct 14 05:41:31 localhost nova_compute[235653]: Westmere
Oct 14 05:41:31 localhost nova_compute[235653]: Westmere-IBRS
Oct 14 05:41:31 localhost nova_compute[235653]: Westmere-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Westmere-v2
Oct 14 05:41:31 localhost nova_compute[235653]: athlon
Oct 14 05:41:31 localhost nova_compute[235653]: athlon-v1
Oct 14 05:41:31 localhost nova_compute[235653]: core2duo
Oct 14 05:41:31 localhost nova_compute[235653]: core2duo-v1
Oct 14 05:41:31 localhost nova_compute[235653]: coreduo
Oct 14 05:41:31 localhost nova_compute[235653]: coreduo-v1
Oct 14 05:41:31 localhost nova_compute[235653]: kvm32
Oct 14 05:41:31 localhost nova_compute[235653]: kvm32-v1
Oct 14 05:41:31 localhost nova_compute[235653]: kvm64
Oct 14 05:41:31 localhost nova_compute[235653]: kvm64-v1
Oct 14 05:41:31 localhost nova_compute[235653]: n270
Oct 14 05:41:31 localhost nova_compute[235653]: n270-v1
Oct 14 05:41:31 localhost nova_compute[235653]: pentium
Oct 14 05:41:31 localhost nova_compute[235653]: pentium-v1
Oct 14 05:41:31 localhost nova_compute[235653]: pentium2
Oct 14 05:41:31 localhost nova_compute[235653]: pentium2-v1
Oct 14 05:41:31 localhost nova_compute[235653]: pentium3
Oct 14 05:41:31 localhost nova_compute[235653]: pentium3-v1
Oct 14 05:41:31 localhost nova_compute[235653]: phenom
Oct 14 05:41:31 localhost nova_compute[235653]: phenom-v1
Oct 14 05:41:31 localhost nova_compute[235653]: qemu32
Oct 14 05:41:31 localhost nova_compute[235653]: qemu32-v1
Oct 14 05:41:31 localhost nova_compute[235653]: qemu64
Oct 14 05:41:31 localhost nova_compute[235653]: qemu64-v1
Oct 14 05:41:31 localhost nova_compute[235653]: file
Oct 14 05:41:31 localhost nova_compute[235653]: anonymous
Oct 14 05:41:31 localhost nova_compute[235653]: memfd
Oct 14 05:41:31 localhost nova_compute[235653]: disk
Oct 14 05:41:31 localhost nova_compute[235653]: cdrom
Oct 14 05:41:31 localhost nova_compute[235653]: floppy
Oct 14 05:41:31 localhost nova_compute[235653]: lun
Oct 14 05:41:31 localhost nova_compute[235653]: fdc
Oct 14 05:41:31 localhost nova_compute[235653]: scsi
Oct 14 05:41:31 localhost nova_compute[235653]: virtio
Oct 14 05:41:31 localhost nova_compute[235653]: usb
Oct 14 05:41:31 localhost nova_compute[235653]: sata
Oct 14 05:41:31 localhost nova_compute[235653]: virtio
Oct 14 05:41:31 localhost nova_compute[235653]: virtio-transitional
Oct 14 05:41:31 localhost nova_compute[235653]: virtio-non-transitional
Oct 14 05:41:31 localhost nova_compute[235653]: vnc
Oct 14 05:41:31 localhost nova_compute[235653]: egl-headless
Oct 14 05:41:31 localhost nova_compute[235653]: dbus
Oct 14 05:41:31 localhost nova_compute[235653]: subsystem
Oct 14 05:41:31 localhost nova_compute[235653]: default
Oct 14 05:41:31 localhost nova_compute[235653]: mandatory
Oct 14 05:41:31 localhost nova_compute[235653]: requisite
Oct 14 05:41:31 localhost nova_compute[235653]: optional
Oct 14 05:41:31 localhost nova_compute[235653]: usb
Oct 14 05:41:31 localhost nova_compute[235653]: pci
Oct 14 05:41:31 localhost nova_compute[235653]: scsi
Oct 14 05:41:31 localhost nova_compute[235653]: virtio
Oct 14 05:41:31 localhost nova_compute[235653]: virtio-transitional
Oct 14 05:41:31 localhost nova_compute[235653]: virtio-non-transitional
Oct 14 05:41:31 localhost nova_compute[235653]: random
Oct 14 05:41:31 localhost nova_compute[235653]: egd
Oct 14 05:41:31 localhost nova_compute[235653]: builtin
Oct 14 05:41:31 localhost nova_compute[235653]: path
Oct 14 05:41:31 localhost nova_compute[235653]: handle
Oct 14 05:41:31 localhost nova_compute[235653]: virtiofs
Oct 14 05:41:31 localhost nova_compute[235653]: tpm-tis
Oct 14 05:41:31 localhost nova_compute[235653]: tpm-crb
Oct 14 05:41:31 localhost nova_compute[235653]: emulator
Oct 14 05:41:31 localhost nova_compute[235653]: external
Oct 14 05:41:31 localhost nova_compute[235653]: 2.0
Oct 14 05:41:31 localhost nova_compute[235653]: usb
Oct 14 05:41:31 localhost nova_compute[235653]: pty
Oct 14 05:41:31 localhost nova_compute[235653]: unix
Oct 14 05:41:31 localhost nova_compute[235653]: qemu
Oct 14 05:41:31 localhost nova_compute[235653]: builtin
Oct 14 05:41:31 localhost nova_compute[235653]: default
Oct 14 05:41:31 localhost nova_compute[235653]: passt
Oct 14 05:41:31 localhost nova_compute[235653]: isa
Oct 14 05:41:31 localhost nova_compute[235653]: hyperv
Oct 14 05:41:31 localhost nova_compute[235653]: relaxed
Oct 14 05:41:31 localhost nova_compute[235653]: vapic
Oct 14 05:41:31 localhost nova_compute[235653]: spinlocks
Oct 14 05:41:31 localhost nova_compute[235653]: vpindex
Oct 14 05:41:31 localhost nova_compute[235653]: runtime
Oct 14 05:41:31 localhost nova_compute[235653]: synic
Oct 14 05:41:31 localhost nova_compute[235653]: stimer
Oct 14 05:41:31 localhost nova_compute[235653]: reset
Oct 14 05:41:31 localhost nova_compute[235653]: vendor_id
Oct 14 05:41:31 localhost nova_compute[235653]: frequencies
Oct 14 05:41:31 localhost nova_compute[235653]: reenlightenment
Oct 14 05:41:31 localhost nova_compute[235653]: tlbflush
Oct 14 05:41:31 localhost nova_compute[235653]: ipi
Oct 14 05:41:31 localhost nova_compute[235653]: avic
Oct 14 05:41:31 localhost nova_compute[235653]: emsr_bitmap
Oct 14 05:41:31 localhost nova_compute[235653]: xmm_input
Oct 14 05:41:31 localhost nova_compute[235653]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.348 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 14 05:41:31 localhost nova_compute[235653]: /usr/libexec/qemu-kvm
Oct 14 05:41:31 localhost nova_compute[235653]: kvm
Oct 14 05:41:31 localhost nova_compute[235653]: pc-i440fx-rhel7.6.0
Oct 14 05:41:31 localhost nova_compute[235653]: i686
Oct 14 05:41:31 localhost nova_compute[235653]: /usr/share/OVMF/OVMF_CODE.secboot.fd
Oct 14 05:41:31 localhost nova_compute[235653]: rom
Oct 14 05:41:31 localhost nova_compute[235653]: pflash
Oct 14 05:41:31 localhost nova_compute[235653]: yes
Oct 14 05:41:31 localhost nova_compute[235653]: no
Oct 14 05:41:31 localhost nova_compute[235653]: no
Oct 14 05:41:31 localhost nova_compute[235653]: on
Oct 14 05:41:31 localhost nova_compute[235653]: off
Oct 14 05:41:31 localhost nova_compute[235653]: on
Oct 14 05:41:31 localhost nova_compute[235653]: off
Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome
Oct 14 05:41:31 localhost nova_compute[235653]: AMD
Oct 14 05:41:31 localhost nova_compute[235653]: 486
Oct 14 05:41:31 localhost nova_compute[235653]: 486-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell
Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-IBRS
Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-noTSX
Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-noTSX-IBRS
Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-v2
Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-v3
Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-v4
Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server
Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-noTSX
Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v2
Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v3
Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v4
Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v5
Oct 14 05:41:31 localhost nova_compute[235653]: Conroe
Oct 14 05:41:31 localhost nova_compute[235653]: Conroe-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Cooperlake
Oct 14 05:41:31 localhost nova_compute[235653]: Cooperlake-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Cooperlake-v2
Oct 14 05:41:31 localhost nova_compute[235653]: Denverton
Oct 14 05:41:31 localhost nova_compute[235653]: Denverton-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Denverton-v2
Oct 14 05:41:31 localhost nova_compute[235653]: Denverton-v3
Oct 14 05:41:31 localhost nova_compute[235653]: Dhyana
Oct 14 05:41:31 localhost nova_compute[235653]: Dhyana-v1
Oct 14 05:41:31 localhost nova_compute[235653]: Dhyana-v2
Oct 14 05:41:31 localhost nova_compute[235653]: EPYC
Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Genoa
Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Genoa-v1
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-IBPB Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Milan Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Milan-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Milan-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome-v4 Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-v1 Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-v2 Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: GraniteRapids Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: GraniteRapids-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: GraniteRapids-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-noTSX Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-noTSX-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-noTSX Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Icelake-Server-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v5 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v6 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: 
Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Icelake-Server-v7 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: IvyBridge Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: IvyBridge-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: IvyBridge-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: IvyBridge-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: KnightsMill Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: KnightsMill-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
Oct 14 05:41:31 localhost nova_compute[235653]: [libvirt domain-capabilities XML dump wrapped across syslog lines; empty continuation prefixes removed and surviving tokens regrouped below — group labels inferred from the usual libvirt domain-capabilities layout]
Oct 14 05:41:31 localhost nova_compute[235653]: CPU models: Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Oct 14 05:41:31 localhost nova_compute[235653]: memory backing source types: file anonymous memfd
Oct 14 05:41:31 localhost nova_compute[235653]: disk device types: disk cdrom floppy lun; buses: ide fdc scsi virtio usb sata; models: virtio virtio-transitional virtio-non-transitional
Oct 14 05:41:31 localhost nova_compute[235653]: graphics types: vnc egl-headless dbus
Oct 14 05:41:31 localhost nova_compute[235653]: hostdev mode: subsystem; startup policies: default mandatory requisite optional; subsystem types: usb pci scsi
Oct 14 05:41:31 localhost nova_compute[235653]: rng models: virtio virtio-transitional virtio-non-transitional; backends: random egd builtin
Oct 14 05:41:31 localhost nova_compute[235653]: filesystem driver types: path handle virtiofs
Oct 14 05:41:31 localhost nova_compute[235653]: tpm models: tpm-tis tpm-crb; backends: emulator external; backend version: 2.0
Oct 14 05:41:31 localhost nova_compute[235653]: redirdev bus: usb; char device types: pty unix
Oct 14 05:41:31 localhost nova_compute[235653]: crypto: qemu; backend: builtin
Oct 14 05:41:31 localhost nova_compute[235653]: interface backends: default passt
Oct 14 05:41:31 localhost nova_compute[235653]: panic models: isa hyperv
Oct 14 05:41:31 localhost nova_compute[235653]: hyperv features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input
Oct 14 05:41:31 localhost nova_compute[235653]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.389 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.394 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 14 05:41:31 localhost nova_compute[235653]: emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.6.0; arch: x86_64
Oct 14 05:41:31 localhost nova_compute[235653]: os firmware: efi; loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd /usr/share/edk2/ovmf/OVMF_CODE.fd /usr/share/edk2/ovmf/OVMF.amdsev.fd /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom pflash; readonly: yes no; secure: yes no; firmware features: on off / on off
Oct 14 05:41:31 localhost nova_compute[235653]: host CPU model: EPYC-Rome; vendor: AMD
Oct 14 05:41:31 localhost nova_compute[235653]: CPU models (list truncated at end of capture): 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-noTSX Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: 
Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v5 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Conroe Oct 14 05:41:31 localhost nova_compute[235653]: Conroe-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Cooperlake Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cooperlake-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cooperlake-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Denverton Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Denverton-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Denverton-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Denverton-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Dhyana Oct 14 05:41:31 localhost nova_compute[235653]: Dhyana-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Dhyana-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Genoa Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Genoa-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-IBPB Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Milan Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Milan-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Milan-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome-v4 Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-v1 Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-v2 Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: GraniteRapids Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: GraniteRapids-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: GraniteRapids-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-noTSX Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-noTSX-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Haswell-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
Oct 14 05:41:31 localhost nova_compute[235653]: [libvirt domainCapabilities XML was logged line-by-line with the markup stripped, leaving only element text interleaved with repeated timestamp prefixes; the recoverable values are condensed below, grouped per the domainCapabilities schema]
Oct 14 05:41:31 localhost nova_compute[235653]: CPU models: Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Oct 14 05:41:31 localhost nova_compute[235653]: memory backing source types: file, anonymous, memfd
Oct 14 05:41:31 localhost nova_compute[235653]: disk device values: disk, cdrom, floppy, lun
Oct 14 05:41:31 localhost nova_compute[235653]: disk bus values: fdc, scsi, virtio, usb, sata
Oct 14 05:41:31 localhost nova_compute[235653]: disk model values: virtio, virtio-transitional, virtio-non-transitional
Oct 14 05:41:31 localhost nova_compute[235653]: graphics types: vnc, egl-headless, dbus
Oct 14 05:41:31 localhost nova_compute[235653]: hostdev mode: subsystem; startupPolicy values: default, mandatory, requisite, optional; hostdev subsys types: usb, pci, scsi [output truncated]
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: virtio Oct 14 05:41:31 localhost nova_compute[235653]: virtio-transitional Oct 14 05:41:31 localhost nova_compute[235653]: virtio-non-transitional Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: random Oct 14 05:41:31 localhost nova_compute[235653]: egd Oct 14 05:41:31 localhost nova_compute[235653]: builtin Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: path Oct 14 05:41:31 localhost nova_compute[235653]: handle Oct 14 05:41:31 localhost nova_compute[235653]: virtiofs Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: tpm-tis Oct 14 05:41:31 localhost nova_compute[235653]: tpm-crb Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: emulator Oct 14 05:41:31 localhost nova_compute[235653]: external Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: 2.0 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: usb 
Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: pty Oct 14 05:41:31 localhost nova_compute[235653]: unix Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: qemu Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: builtin Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: default Oct 14 05:41:31 localhost nova_compute[235653]: passt Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: isa Oct 14 05:41:31 localhost nova_compute[235653]: hyperv Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: relaxed Oct 14 05:41:31 localhost nova_compute[235653]: vapic Oct 14 05:41:31 localhost nova_compute[235653]: spinlocks Oct 14 05:41:31 localhost nova_compute[235653]: vpindex Oct 14 05:41:31 localhost nova_compute[235653]: runtime Oct 14 05:41:31 localhost nova_compute[235653]: synic Oct 14 05:41:31 localhost nova_compute[235653]: stimer Oct 14 05:41:31 localhost nova_compute[235653]: reset Oct 14 05:41:31 localhost nova_compute[235653]: vendor_id Oct 14 05:41:31 localhost nova_compute[235653]: frequencies Oct 14 05:41:31 localhost nova_compute[235653]: reenlightenment Oct 14 05:41:31 localhost nova_compute[235653]: tlbflush Oct 14 05:41:31 localhost nova_compute[235653]: ipi Oct 14 05:41:31 localhost nova_compute[235653]: avic Oct 14 05:41:31 localhost nova_compute[235653]: emsr_bitmap Oct 14 05:41:31 localhost nova_compute[235653]: xmm_input Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.448 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: /usr/libexec/qemu-kvm Oct 14 05:41:31 localhost nova_compute[235653]: kvm Oct 14 05:41:31 localhost nova_compute[235653]: pc-i440fx-rhel7.6.0 Oct 14 05:41:31 localhost nova_compute[235653]: x86_64 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: rom Oct 14 05:41:31 localhost nova_compute[235653]: pflash Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: yes Oct 14 05:41:31 localhost nova_compute[235653]: no Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: no Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: on Oct 14 05:41:31 localhost nova_compute[235653]: off Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: on Oct 14 05:41:31 localhost nova_compute[235653]: off Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome Oct 14 05:41:31 localhost nova_compute[235653]: AMD Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: 
Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: 486 Oct 14 05:41:31 localhost nova_compute[235653]: 486-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Broadwell-noTSX Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-noTSX-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Broadwell-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Cascadelake-Server Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-noTSX Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cascadelake-Server-v5 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Conroe Oct 14 05:41:31 localhost nova_compute[235653]: Conroe-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Cooperlake Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 
05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cooperlake-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Cooperlake-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Denverton Oct 14 05:41:31 localhost nova_compute[235653]: 
Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Denverton-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Denverton-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Denverton-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Dhyana Oct 14 05:41:31 localhost nova_compute[235653]: Dhyana-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Dhyana-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Genoa Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Genoa-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-IBPB Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Milan Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Milan-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Milan-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: EPYC-Rome Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
Oct 14 05:41:31 localhost nova_compute[235653]: [libvirt CPU model list; surrounding XML markup lost in extraction — recoverable model names, in order: EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1]
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-v2 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 
14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Skylake-Server-v5 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge-v2 Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge-v3 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Snowridge-v4 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Westmere Oct 14 05:41:31 localhost nova_compute[235653]: Westmere-IBRS Oct 14 05:41:31 localhost nova_compute[235653]: Westmere-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Westmere-v2 Oct 14 05:41:31 localhost nova_compute[235653]: athlon Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: athlon-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: core2duo Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: core2duo-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: coreduo Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: coreduo-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: kvm32 Oct 14 05:41:31 localhost nova_compute[235653]: kvm32-v1 Oct 14 05:41:31 localhost nova_compute[235653]: kvm64 Oct 14 05:41:31 localhost nova_compute[235653]: kvm64-v1 Oct 14 05:41:31 localhost nova_compute[235653]: n270 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: n270-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: pentium Oct 14 05:41:31 localhost nova_compute[235653]: pentium-v1 Oct 14 05:41:31 localhost nova_compute[235653]: pentium2 Oct 14 05:41:31 localhost nova_compute[235653]: pentium2-v1 Oct 14 05:41:31 localhost nova_compute[235653]: pentium3 Oct 14 05:41:31 localhost nova_compute[235653]: pentium3-v1 Oct 14 05:41:31 localhost nova_compute[235653]: phenom Oct 14 05:41:31 localhost 
nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: phenom-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: qemu32 Oct 14 05:41:31 localhost nova_compute[235653]: qemu32-v1 Oct 14 05:41:31 localhost nova_compute[235653]: qemu64 Oct 14 05:41:31 localhost nova_compute[235653]: qemu64-v1 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: file Oct 14 05:41:31 localhost nova_compute[235653]: anonymous Oct 14 05:41:31 localhost nova_compute[235653]: memfd Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: disk Oct 14 05:41:31 localhost nova_compute[235653]: cdrom Oct 14 05:41:31 localhost nova_compute[235653]: floppy Oct 14 05:41:31 localhost nova_compute[235653]: lun Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: ide Oct 14 05:41:31 localhost nova_compute[235653]: fdc Oct 14 05:41:31 localhost nova_compute[235653]: scsi Oct 14 05:41:31 localhost nova_compute[235653]: virtio Oct 14 05:41:31 localhost nova_compute[235653]: usb Oct 14 05:41:31 localhost nova_compute[235653]: sata Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost 
nova_compute[235653]: virtio Oct 14 05:41:31 localhost nova_compute[235653]: virtio-transitional Oct 14 05:41:31 localhost nova_compute[235653]: virtio-non-transitional Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: vnc Oct 14 05:41:31 localhost nova_compute[235653]: egl-headless Oct 14 05:41:31 localhost nova_compute[235653]: dbus Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: subsystem Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: default Oct 14 05:41:31 localhost nova_compute[235653]: mandatory Oct 14 05:41:31 localhost nova_compute[235653]: requisite Oct 14 05:41:31 localhost nova_compute[235653]: optional Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: usb Oct 14 05:41:31 localhost nova_compute[235653]: pci Oct 14 05:41:31 localhost nova_compute[235653]: scsi Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: virtio Oct 14 05:41:31 localhost nova_compute[235653]: virtio-transitional Oct 14 05:41:31 localhost nova_compute[235653]: virtio-non-transitional Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 
localhost nova_compute[235653]: random Oct 14 05:41:31 localhost nova_compute[235653]: egd Oct 14 05:41:31 localhost nova_compute[235653]: builtin Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: path Oct 14 05:41:31 localhost nova_compute[235653]: handle Oct 14 05:41:31 localhost nova_compute[235653]: virtiofs Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: tpm-tis Oct 14 05:41:31 localhost nova_compute[235653]: tpm-crb Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: emulator Oct 14 05:41:31 localhost nova_compute[235653]: external Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: 2.0 Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: usb Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: pty Oct 14 05:41:31 localhost nova_compute[235653]: unix Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: qemu 
Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: builtin Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: default Oct 14 05:41:31 localhost nova_compute[235653]: passt Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: isa Oct 14 05:41:31 localhost nova_compute[235653]: hyperv Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: relaxed Oct 14 05:41:31 localhost nova_compute[235653]: vapic Oct 14 05:41:31 localhost nova_compute[235653]: spinlocks Oct 14 05:41:31 localhost nova_compute[235653]: vpindex Oct 14 05:41:31 localhost nova_compute[235653]: runtime Oct 14 05:41:31 localhost nova_compute[235653]: synic Oct 14 05:41:31 localhost nova_compute[235653]: stimer Oct 14 05:41:31 localhost nova_compute[235653]: reset Oct 14 05:41:31 localhost nova_compute[235653]: vendor_id Oct 14 05:41:31 localhost 
nova_compute[235653]: frequencies Oct 14 05:41:31 localhost nova_compute[235653]: reenlightenment Oct 14 05:41:31 localhost nova_compute[235653]: tlbflush Oct 14 05:41:31 localhost nova_compute[235653]: ipi Oct 14 05:41:31 localhost nova_compute[235653]: avic Oct 14 05:41:31 localhost nova_compute[235653]: emsr_bitmap Oct 14 05:41:31 localhost nova_compute[235653]: xmm_input Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: Oct 14 05:41:31 localhost nova_compute[235653]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.487 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.487 2 INFO nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Secure Boot support detected#033[00m Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.490 2 INFO nova.virt.libvirt.driver [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.490 2 INFO nova.virt.libvirt.driver [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.504 2 DEBUG nova.virt.libvirt.driver [None 
req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.527 2 INFO nova.virt.node [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Determined node identity ebb6de71-88e5-4477-92fc-f2b9532f7fcd from /var/lib/nova/compute_id
Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.548 2 DEBUG nova.compute.manager [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Verified node ebb6de71-88e5-4477-92fc-f2b9532f7fcd matches my host np0005486731.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Oct 14 05:41:31 localhost nova_compute[235653]: 2025-10-14 09:41:31.579 2 INFO nova.compute.manager [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 14 05:41:32 localhost nova_compute[235653]: 2025-10-14 09:41:32.280 2 INFO nova.service [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Updating service version for nova-compute on np0005486731.localdomain from 57 to 66
Oct 14 05:41:32 localhost nova_compute[235653]: 2025-10-14 09:41:32.307 2 DEBUG oslo_concurrency.lockutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:41:32 localhost nova_compute[235653]: 2025-10-14 09:41:32.308 2 DEBUG oslo_concurrency.lockutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:41:32 localhost nova_compute[235653]: 2025-10-14 09:41:32.308 2 DEBUG oslo_concurrency.lockutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:41:32 localhost nova_compute[235653]: 2025-10-14 09:41:32.308 2 DEBUG nova.compute.resource_tracker [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 05:41:32 localhost nova_compute[235653]: 2025-10-14 09:41:32.308 2 DEBUG oslo_concurrency.processutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:41:32 localhost python3.9[235950]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:41:32 localhost nova_compute[235653]: 2025-10-14 09:41:32.773 2 DEBUG oslo_concurrency.processutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:41:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:41:32 localhost systemd[1]: Starting libvirt nodedev daemon...
Oct 14 05:41:32 localhost systemd[1]: Started libvirt nodedev daemon.
Oct 14 05:41:32 localhost podman[236028]: 2025-10-14 09:41:32.884780682 +0000 UTC m=+0.073692336 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 05:41:32 localhost podman[236028]: 2025-10-14 09:41:32.9738501 +0000 UTC m=+0.162761764 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:41:32 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.142 2 WARNING nova.virt.libvirt.driver [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] This host appears to have multiple sockets per NUMA node.
The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.143 2 DEBUG nova.compute.resource_tracker [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=13559MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.143 2 DEBUG oslo_concurrency.lockutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.143 2 DEBUG oslo_concurrency.lockutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:41:33 localhost python3.9[236127]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.262 2 DEBUG nova.compute.resource_tracker [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.263 2 DEBUG nova.compute.resource_tracker [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.451 2 DEBUG nova.scheduler.client.report [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.468 2 DEBUG nova.scheduler.client.report [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.469 2 DEBUG nova.compute.provider_tree [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 05:41:33 localhost kernel: DROPPING: IN=br-ex OUT=
MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45458 DF PROTO=TCP SPT=41982 DPT=9100 SEQ=4217310564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7619A6E90000000001030307) Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.712 2 DEBUG nova.scheduler.client.report [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.734 2 DEBUG nova.scheduler.client.report [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE4A,COMPUTE_TRUS
TED_CERTS,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 14 05:41:33 localhost nova_compute[235653]: 2025-10-14 09:41:33.762 2 DEBUG oslo_concurrency.processutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.213 2 DEBUG oslo_concurrency.processutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.219 2 DEBUG nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Oct 14 05:41:34 localhost nova_compute[235653]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.220 2 INFO nova.virt.libvirt.host [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] kernel doesn't support AMD SEV#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.221 2 DEBUG nova.compute.provider_tree [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.222 2 DEBUG 
nova.virt.libvirt.driver [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.251 2 DEBUG nova.scheduler.client.report [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.366 2 DEBUG nova.compute.provider_tree [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Updating resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.397 2 DEBUG nova.compute.resource_tracker [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.398 2 DEBUG oslo_concurrency.lockutils [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: 
held 1.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.398 2 DEBUG nova.service [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.452 2 DEBUG nova.service [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.452 2 DEBUG nova.servicegroup.drivers.db [None req-07aec30c-fb1d-4a73-88b3-b220006f9d7d - - - - - -] DB_Driver: join new ServiceGroup member np0005486731.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.455 2 DEBUG oslo_service.periodic_task [None req-3bc59178-b415-48e3-94e8-def35e25dea1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:41:34 localhost nova_compute[235653]: 2025-10-14 09:41:34.474 2 DEBUG oslo_service.periodic_task [None req-3bc59178-b415-48e3-94e8-def35e25dea1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:41:34 localhost python3.9[236259]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None 
blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER 
sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Oct 14 05:41:34 localhost systemd-journald[47332]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 121.6 (405 of 333 items), suggesting rotation. Oct 14 05:41:34 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 14 05:41:34 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:41:35 localhost python3.9[236394]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:41:35 localhost systemd[1]: Stopping nova_compute container... Oct 14 05:41:36 localhost systemd[1]: tmp-crun.A6MwPx.mount: Deactivated successfully. Oct 14 05:41:36 localhost nova_compute[235653]: 2025-10-14 09:41:36.523 2 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Oct 14 05:41:36 localhost nova_compute[235653]: 2025-10-14 09:41:36.526 2 DEBUG oslo_concurrency.lockutils [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 05:41:36 localhost nova_compute[235653]: 2025-10-14 09:41:36.526 2 DEBUG oslo_concurrency.lockutils [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 05:41:36 localhost nova_compute[235653]: 2025-10-14 09:41:36.526 2 DEBUG oslo_concurrency.lockutils [None req-d84f914a-ca02-45ae-af24-0116f502fd56 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 05:41:37 localhost systemd[1]: libpod-1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89.scope: Deactivated successfully. Oct 14 05:41:37 localhost journal[235816]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, ) Oct 14 05:41:37 localhost journal[235816]: hostname: np0005486731.localdomain Oct 14 05:41:37 localhost journal[235816]: End of file while reading data: Input/output error Oct 14 05:41:37 localhost systemd[1]: libpod-1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89.scope: Consumed 3.937s CPU time. 
Oct 14 05:41:37 localhost podman[236398]: 2025-10-14 09:41:37.056585062 +0000 UTC m=+1.082192659 container died 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm) Oct 14 05:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:41:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89-userdata-shm.mount: Deactivated successfully. 
Oct 14 05:41:37 localhost systemd[1]: var-lib-containers-storage-overlay-0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866-merged.mount: Deactivated successfully. Oct 14 05:41:37 localhost podman[236419]: 2025-10-14 09:41:37.542790637 +0000 UTC m=+0.468029401 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 05:41:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45459 DF PROTO=TCP SPT=41982 DPT=9100 SEQ=4217310564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7619B6A90000000001030307) Oct 14 05:41:37 localhost podman[236419]: 2025-10-14 09:41:37.761513573 +0000 UTC m=+0.686752337 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:41:38 localhost systemd[1]: tmp-crun.6MCTg1.mount: Deactivated successfully. Oct 14 05:41:40 localhost podman[236398]: 2025-10-14 09:41:40.097188213 +0000 UTC m=+4.122795820 container cleanup 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.build-date=20251009, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 05:41:40 localhost podman[236398]: nova_compute Oct 14 05:41:40 localhost podman[236412]: 2025-10-14 09:41:40.100342246 +0000 UTC m=+3.039471553 container cleanup 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 05:41:40 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:41:40 localhost podman[236454]: 2025-10-14 09:41:40.184257219 +0000 UTC m=+0.054057614 container cleanup 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 05:41:40 localhost podman[236454]: nova_compute Oct 14 05:41:40 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Oct 14 05:41:40 localhost systemd[1]: Stopped nova_compute container. Oct 14 05:41:40 localhost systemd[1]: Starting nova_compute container... Oct 14 05:41:40 localhost systemd[1]: Started libcrun container. 
Oct 14 05:41:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 14 05:41:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Oct 14 05:41:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 05:41:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 05:41:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 05:41:40 localhost podman[236465]: 2025-10-14 09:41:40.323541858 +0000 UTC m=+0.112230684 container init 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 05:41:40 localhost nova_compute[236479]: + sudo -E kolla_set_configs Oct 14 05:41:40 localhost podman[236465]: 2025-10-14 09:41:40.337145664 +0000 UTC m=+0.125834490 container start 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 05:41:40 localhost podman[236465]: nova_compute Oct 14 05:41:40 localhost systemd[1]: Started nova_compute container. Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Validating config file Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying service configuration files Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Deleting /etc/nova/nova.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /etc/nova/nova.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Oct 14 05:41:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33907 DF PROTO=TCP SPT=39158 DPT=9105 SEQ=374961216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7619C1E90000000001030307) Oct 14 05:41:40 localhost nova_compute[236479]: 
INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Deleting /etc/ceph Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Creating directory /etc/ceph Oct 14 05:41:40 localhost nova_compute[236479]: 
INFO:__main__:Setting permission for /etc/ceph Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Writing out command to execute Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:41:40 localhost nova_compute[236479]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 14 05:41:40 localhost 
nova_compute[236479]: ++ cat /run_command Oct 14 05:41:40 localhost nova_compute[236479]: + CMD=nova-compute Oct 14 05:41:40 localhost nova_compute[236479]: + ARGS= Oct 14 05:41:40 localhost nova_compute[236479]: + sudo kolla_copy_cacerts Oct 14 05:41:40 localhost nova_compute[236479]: + [[ ! -n '' ]] Oct 14 05:41:40 localhost nova_compute[236479]: + . kolla_extend_start Oct 14 05:41:40 localhost nova_compute[236479]: Running command: 'nova-compute' Oct 14 05:41:40 localhost nova_compute[236479]: + echo 'Running command: '\''nova-compute'\''' Oct 14 05:41:40 localhost nova_compute[236479]: + umask 0022 Oct 14 05:41:40 localhost nova_compute[236479]: + exec nova-compute Oct 14 05:41:41 localhost nova_compute[236479]: 2025-10-14 09:41:41.997 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 14 05:41:41 localhost nova_compute[236479]: 2025-10-14 09:41:41.998 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 14 05:41:41 localhost nova_compute[236479]: 2025-10-14 09:41:41.998 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 14 05:41:41 localhost nova_compute[236479]: 2025-10-14 09:41:41.998 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.146 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.155 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 
05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.593 2 INFO nova.virt.driver [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.710 2 INFO nova.compute.provider_config [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.719 2 WARNING nova.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.720 2 DEBUG oslo_concurrency.lockutils [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.720 2 DEBUG oslo_concurrency.lockutils [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.721 2 DEBUG oslo_concurrency.lockutils [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.721 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.721 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.722 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.722 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.722 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.722 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.723 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 
09:41:42.723 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.723 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.723 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.723 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.724 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.724 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.724 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.724 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.725 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.725 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.725 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.725 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.726 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] console_host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.726 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.726 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cpu_allocation_ratio = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.726 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.727 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.727 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.727 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.727 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.728 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 
'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.728 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.728 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.728 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.729 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.729 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.729 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.729 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.730 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.730 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.730 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.730 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.731 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.731 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.731 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.731 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.732 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.732 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.732 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.732 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.733 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.733 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] instance_usage_audit_period = month log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.733 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.733 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.734 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.734 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.734 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.734 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.735 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 
05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.735 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.735 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.735 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.735 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.736 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.736 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.736 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 
localhost nova_compute[236479]: 2025-10-14 09:41:42.736 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.737 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.737 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.737 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.738 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.738 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.738 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.738 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.739 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.739 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.739 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.739 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.740 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.740 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] metadata_listen_port = 8775 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.740 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.741 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.741 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.741 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.741 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.742 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.742 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 
05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.742 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.742 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.743 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.743 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.743 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.743 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.744 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.744 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.744 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.744 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.745 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.745 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.745 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.745 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.746 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] rate_limit_interval = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.746 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.746 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.746 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.747 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.747 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.747 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.747 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.748 2 DEBUG 
oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.748 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.748 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.748 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.748 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.749 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.749 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.749 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.749 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.750 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.750 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.750 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.750 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.751 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.751 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] servicegroup_driver = db log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.751 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.751 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.752 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.752 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.752 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.752 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.752 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.753 
2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.753 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.753 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.753 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.754 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.754 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.754 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.754 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] use_eventlog = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.755 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.755 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.755 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.755 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.756 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.756 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.756 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.756 2 DEBUG 
oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.756 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.757 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.757 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.757 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.757 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.758 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.758 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.758 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.758 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.759 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.759 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.759 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.759 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.760 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.760 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.760 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.761 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.761 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.761 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.761 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.762 2 
DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.762 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.762 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.762 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.763 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.763 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.763 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.763 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.764 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.764 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.764 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.765 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.765 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.765 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.765 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.765 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.766 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.766 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.766 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.766 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.767 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.767 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.enabled = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.767 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.768 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.768 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.768 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.769 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.769 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.769 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.769 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.770 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.770 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.770 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.771 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.771 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.771 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.771 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.771 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.772 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.772 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.772 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.772 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.773 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.773 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.773 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.773 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.774 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.774 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.774 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.774 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.775 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.775 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.775 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.775 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.775 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.776 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.776 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.776 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.776 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.777 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.777 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.777 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.777 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.777 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.778 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.778 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.778 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.778 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.779 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.779 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.779 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.779 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.779 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.780 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.780 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.780 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.780 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.781 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.781 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.781 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.781 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.782 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.782 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.782 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.782 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.782 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.783 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.783 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.783 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.783 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.784 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.784 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.784 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.784 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.785 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.785 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.785 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.785 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.785 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.786 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.786 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.786 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.786 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.787 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.787 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.787 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.787 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.788 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.788 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.788 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.788 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.788 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.789 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.789 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.789 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.789 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.790 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.790 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.790 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.790 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.791 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.791 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.791 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.791 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.792 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.792 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.792 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.792 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.792 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.793 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.793 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.793 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.793 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.794 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.794 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.794 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.794 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.795 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.795 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.795 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.795 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.795 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.796 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.796 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.796 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.796 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.797 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.797 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.797 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.797 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.797 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.798 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.798 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.798 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.798 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.799 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.799 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.799 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.799 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.800 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.800 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.800 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.800 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.801 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.801 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.801 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.801 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.801 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.802 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.802 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.802 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.803 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.803 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.804 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.804 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.804 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.805 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.805 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.805 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.806 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.806 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.806 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.807 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.807 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.807 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.808 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.808 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.808 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.808 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.vswitch_name = None
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.809 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.809 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.810 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.810 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.810 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.811 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.811 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.811 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.811 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.812 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.812 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.812 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.813 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.813 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 
localhost nova_compute[236479]: 2025-10-14 09:41:42.814 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.814 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.814 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.814 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.815 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.815 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.815 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.816 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.816 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.816 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.816 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.817 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.817 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.817 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.818 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.service_type = baremetal log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.818 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.818 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.819 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.819 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.819 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.819 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.820 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 
localhost nova_compute[236479]: 2025-10-14 09:41:42.820 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.820 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.821 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.821 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.821 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.822 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.822 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.822 
2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.822 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.823 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.823 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.823 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.824 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.825 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.825 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.825 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.825 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.826 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.826 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.826 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.827 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.827 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican_service_user.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.827 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.828 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.828 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.828 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.828 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.829 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.829 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.approle_secret_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.829 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.830 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.830 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.830 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.831 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.831 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.831 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.832 2 DEBUG 
oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.832 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.832 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.832 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.833 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.833 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.833 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.834 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.834 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.834 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.835 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.835 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.835 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.836 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.836 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost 
nova_compute[236479]: 2025-10-14 09:41:42.836 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.836 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.837 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.837 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.837 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.838 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.838 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.838 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.839 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.839 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.839 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.840 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.840 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.840 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.841 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.841 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.841 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.842 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.842 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.842 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.843 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.843 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.843 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.844 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.844 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.844 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.845 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.845 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.845 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.846 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.846 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.846 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.847 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.847 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.847 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.848 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.848 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.848 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.848 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.849 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.849 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.849 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.850 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.850 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.850 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.850 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.851 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.851 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.851 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.851 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.851 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.852 2 WARNING oslo_config.cfg [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 14 05:41:42 localhost nova_compute[236479]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 14 05:41:42 localhost nova_compute[236479]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 14 05:41:42 localhost nova_compute[236479]: and ``live_migration_inbound_addr`` respectively.
Oct 14 05:41:42 localhost nova_compute[236479]: ). Its value may be silently ignored in the future.#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.852 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.852 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.852 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.853 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.853 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.853 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.853 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.853 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.854 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.854 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.854 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.854 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.854 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.855 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.855 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.855 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.855 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.855 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.855 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rbd_secret_uuid = fcadf6e2-9176-5818-a8d0-37b19acf8eaf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.856 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.856 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.856 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.856 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.856 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.857 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.857 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.857 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.857 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.857 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.858 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.858 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.858 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.858 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.858 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.859 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.859 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.859 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.859 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.860 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.860 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.860 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.860 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.860 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.861 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.861 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.861 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.861 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.862 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.862 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.862 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.862 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.862 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.863 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.863 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.863 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.863 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.863 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.864 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.864 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.864 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.864 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.864 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.864 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.865 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.865 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.865 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.865 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.865 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.866 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.866 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.866 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.866 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.866 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.867 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.867 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.867 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.867 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.867 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.867 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.868 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.868 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.868 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.868 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.868 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.869 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.869 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.869 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.869 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.870 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.870 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.870 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.870 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.870 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.870 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.871 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.871 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.871 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.871 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.871 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.872 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.872 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.872 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.872 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.872 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:41:42 localhost
nova_compute[236479]: 2025-10-14 09:41:42.872 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.873 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.873 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.873 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.873 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.873 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.873 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.874 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.874 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.874 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.874 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.874 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.875 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.875 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.875 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.trust_id = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.875 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.875 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.876 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.876 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.876 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.876 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.876 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost 
nova_compute[236479]: 2025-10-14 09:41:42.877 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.877 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.877 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.877 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.877 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.878 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.878 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.878 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.878 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.878 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.879 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.879 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.879 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.879 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.880 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.880 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.880 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.880 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.880 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.881 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.881 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.881 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.881 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.881 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.882 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.882 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.882 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.882 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.882 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.883 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.883 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.883 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.883 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.883 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.884 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.884 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.884 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.884 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.884 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.884 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.885 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.885 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.885 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.885 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.885 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.886 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.886 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.886 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.886 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.886 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.886 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.887 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.887 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.887 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.887 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.887 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.887 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.887 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.888 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.888 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.888 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.auth_type = password log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.888 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.888 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.888 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.888 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.889 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.889 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.889 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 
localhost nova_compute[236479]: 2025-10-14 09:41:42.889 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.889 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.889 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.889 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.890 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.890 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.890 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.890 2 DEBUG 
oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.890 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.890 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.890 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.890 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.891 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.891 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.891 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.891 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.891 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.891 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.891 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.891 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.892 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.892 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.892 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.892 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.892 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.892 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.892 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.893 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.893 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.ca_file = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.893 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.893 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.893 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.893 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.893 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.893 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.894 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost 
nova_compute[236479]: 2025-10-14 09:41:42.894 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.894 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.894 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.894 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.894 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.894 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.894 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.895 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.895 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.895 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.895 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.895 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.895 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.895 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.895 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.vnc_port = 5900 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.896 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.896 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.896 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.896 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.896 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.896 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.897 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.server_listen = ::0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.897 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.897 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.897 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.897 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.897 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.897 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.898 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.898 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.898 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.898 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.898 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.898 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.898 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.898 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.899 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.899 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.899 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.899 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.899 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.899 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.899 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.899 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.900 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.900 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.900 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.900 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.900 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.900 2 
DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.900 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.901 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.901 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.901 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.901 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.901 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.901 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.tcp_keepidle = 600 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.901 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.901 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.902 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.902 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.902 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.902 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.902 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_policy.enforce_scope 
= True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.902 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.902 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.903 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.903 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.903 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.903 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.903 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.903 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.903 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.904 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.904 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.904 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.904 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.904 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.904 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.904 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.904 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.905 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.905 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.905 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.905 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.905 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.905 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.905 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.906 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.906 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.906 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 
2025-10-14 09:41:42.906 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.906 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.906 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.906 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.906 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.907 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.907 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.907 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.907 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.907 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.907 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.907 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.908 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.908 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.908 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.908 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.908 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.908 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.908 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.909 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.909 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.909 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.909 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.909 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.909 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.909 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.909 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.910 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.910 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.910 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.910 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.910 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.910 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.910 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.910 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 
localhost nova_compute[236479]: 2025-10-14 09:41:42.911 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.911 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.911 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.911 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.911 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.911 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.911 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.912 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.912 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.912 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.912 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.912 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.912 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.912 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.912 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.913 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.913 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.913 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.913 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.913 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.913 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.913 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.914 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.914 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.914 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.914 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.914 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.914 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.914 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.914 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.915 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.915 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.915 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.915 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.915 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.915 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] 
vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.915 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.916 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.916 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.916 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.916 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.916 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.916 2 DEBUG oslo_service.service [None 
req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.916 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.917 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.917 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.917 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.917 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.917 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.917 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - 
- - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.917 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.917 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.918 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.918 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.918 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.918 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.918 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] privsep_osbrick.group = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.918 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.918 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.919 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.919 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.919 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.919 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.919 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] nova_sys_admin.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.919 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.919 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.919 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.920 2 DEBUG oslo_service.service [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.921 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.934 2 INFO nova.virt.node [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Determined node identity ebb6de71-88e5-4477-92fc-f2b9532f7fcd from /var/lib/nova/compute_id#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.934 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.935 2 
DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.935 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.935 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.943 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.945 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.946 2 INFO nova.virt.libvirt.driver [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Connection event '1' reason 'None'#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.953 2 INFO nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Libvirt host capabilities Oct 14 05:41:42 localhost nova_compute[236479]: [libvirt capabilities XML; element tags lost in log extraction. Recoverable values: host UUID adf6dc17-eeaa-420b-a893-ea8f9e53b331; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp, rdma; memory 16116612 KiB (4029153 pages); security models selinux (DOI 0, labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (DOI 0, +107:+107); hvm guest support for 32-bit and 64-bit via /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.6.0 (canonical q35).]
localhost nova_compute[236479]: pc-q35-rhel9.0.0 Oct 14 05:41:42 localhost nova_compute[236479]: pc-q35-rhel8.0.0 Oct 14 05:41:42 localhost nova_compute[236479]: pc-q35-rhel8.1.0 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: #033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.961 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.962 2 DEBUG nova.virt.libvirt.volume.mount [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Oct 14 05:41:42 localhost nova_compute[236479]: 2025-10-14 09:41:42.966 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: /usr/libexec/qemu-kvm Oct 14 05:41:42 localhost nova_compute[236479]: kvm Oct 14 05:41:42 localhost nova_compute[236479]: 
pc-i440fx-rhel7.6.0 Oct 14 05:41:42 localhost nova_compute[236479]: i686 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: rom Oct 14 05:41:42 localhost nova_compute[236479]: pflash Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: yes Oct 14 05:41:42 localhost nova_compute[236479]: no Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: no Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: on Oct 14 05:41:42 localhost nova_compute[236479]: off Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: on Oct 14 05:41:42 localhost nova_compute[236479]: off Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Rome Oct 14 05:41:42 localhost nova_compute[236479]: AMD Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 
localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: 486 Oct 14 05:41:42 localhost nova_compute[236479]: 486-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Broadwell Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Broadwell-IBRS Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 
05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Broadwell-noTSX Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Broadwell-noTSX-IBRS Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Broadwell-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Broadwell-v2 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Broadwell-v3 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Broadwell-v4 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost 
nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Cascadelake-Server Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Cascadelake-Server-noTSX Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Cascadelake-Server-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost 
nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Cascadelake-Server-v2 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Cascadelake-Server-v3 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Cascadelake-Server-v4 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost 
nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Cascadelake-Server-v5 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Conroe Oct 14 05:41:42 localhost nova_compute[236479]: Conroe-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Cooperlake Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 
05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Cooperlake-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Cooperlake-v2 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 
05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Denverton Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Denverton-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Denverton-v2 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Denverton-v3 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Dhyana Oct 14 05:41:42 localhost nova_compute[236479]: Dhyana-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Dhyana-v2 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Genoa Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost 
nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Genoa-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost 
nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-IBPB Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Milan Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Milan-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Milan-v2 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 
05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Rome Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Rome-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Rome-v2 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Rome-v3 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-Rome-v4 Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-v1 Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-v2 Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-v3 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: EPYC-v4 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: GraniteRapids Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost 
nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: GraniteRapids-v1 Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost nova_compute[236479]: Oct 14 05:41:42 localhost 
Oct 14 05:41:42 localhost nova_compute[236479]: [libvirt CPU model list; surrounding XML markup lost in log capture, empty continuation lines collapsed]
Oct 14 05:41:42 localhost nova_compute[236479]: GraniteRapids-v2
Oct 14 05:41:42 localhost nova_compute[236479]: Haswell
Oct 14 05:41:42 localhost nova_compute[236479]: Haswell-IBRS
Oct 14 05:41:42 localhost nova_compute[236479]: Haswell-noTSX
Oct 14 05:41:42 localhost nova_compute[236479]: Haswell-noTSX-IBRS
Oct 14 05:41:42 localhost nova_compute[236479]: Haswell-v1
Oct 14 05:41:42 localhost nova_compute[236479]: Haswell-v2
Oct 14 05:41:42 localhost nova_compute[236479]: Haswell-v3
Oct 14 05:41:42 localhost nova_compute[236479]: Haswell-v4
Oct 14 05:41:42 localhost nova_compute[236479]: Icelake-Server
Oct 14 05:41:42 localhost nova_compute[236479]: Icelake-Server-noTSX
Oct 14 05:41:42 localhost nova_compute[236479]: Icelake-Server-v1
Oct 14 05:41:42 localhost nova_compute[236479]: Icelake-Server-v2
Oct 14 05:41:42 localhost nova_compute[236479]: Icelake-Server-v3
Oct 14 05:41:42 localhost nova_compute[236479]: Icelake-Server-v4
Oct 14 05:41:42 localhost nova_compute[236479]: Icelake-Server-v5
Oct 14 05:41:42 localhost nova_compute[236479]: Icelake-Server-v6
Oct 14 05:41:42 localhost nova_compute[236479]: Icelake-Server-v7
Oct 14 05:41:42 localhost nova_compute[236479]: IvyBridge
Oct 14 05:41:42 localhost nova_compute[236479]: IvyBridge-IBRS
Oct 14 05:41:42 localhost nova_compute[236479]: IvyBridge-v1
Oct 14 05:41:42 localhost nova_compute[236479]: IvyBridge-v2
Oct 14 05:41:42 localhost nova_compute[236479]: KnightsMill
Oct 14 05:41:42 localhost nova_compute[236479]: KnightsMill-v1
Oct 14 05:41:42 localhost nova_compute[236479]: Nehalem
Oct 14 05:41:42 localhost nova_compute[236479]: Nehalem-IBRS
Oct 14 05:41:42 localhost nova_compute[236479]: Nehalem-v1
Oct 14 05:41:42 localhost nova_compute[236479]: Nehalem-v2
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G1
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G1-v1
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G2
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G2-v1
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G3
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G3-v1
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G4
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G4-v1
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G5
Oct 14 05:41:42 localhost nova_compute[236479]: Opteron_G5-v1
Oct 14 05:41:42 localhost nova_compute[236479]: Penryn
Oct 14 05:41:42 localhost nova_compute[236479]: Penryn-v1
Oct 14 05:41:42 localhost nova_compute[236479]: SandyBridge
Oct 14 05:41:42 localhost nova_compute[236479]: SandyBridge-IBRS
Oct 14 05:41:42 localhost nova_compute[236479]: SandyBridge-v1
Oct 14 05:41:42 localhost nova_compute[236479]: SandyBridge-v2
Oct 14 05:41:42 localhost nova_compute[236479]: SapphireRapids
Oct 14 05:41:42 localhost nova_compute[236479]: SapphireRapids-v1
Oct 14 05:41:43 localhost nova_compute[236479]: SapphireRapids-v2
Oct 14 05:41:43 localhost nova_compute[236479]: SapphireRapids-v3
Oct 14 05:41:43 localhost nova_compute[236479]: SierraForest
Oct 14 05:41:43 localhost nova_compute[236479]: SierraForest-v1
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-IBRS
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-noTSX-IBRS
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-v1
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-v2
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-v3
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-v4
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-IBRS
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-noTSX-IBRS
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v1
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v2
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v3
Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v4
14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v5 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Westmere Oct 14 05:41:43 localhost nova_compute[236479]: Westmere-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Westmere-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Westmere-v2 Oct 14 05:41:43 localhost nova_compute[236479]: athlon Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: athlon-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: core2duo Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: core2duo-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: coreduo Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: coreduo-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: kvm32 Oct 14 05:41:43 localhost nova_compute[236479]: kvm32-v1 Oct 14 05:41:43 localhost nova_compute[236479]: kvm64 Oct 14 05:41:43 localhost nova_compute[236479]: kvm64-v1 Oct 14 05:41:43 localhost nova_compute[236479]: n270 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: n270-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: pentium Oct 14 05:41:43 localhost nova_compute[236479]: pentium-v1 Oct 14 05:41:43 localhost nova_compute[236479]: pentium2 Oct 14 05:41:43 localhost nova_compute[236479]: pentium2-v1 
Oct 14 05:41:43 localhost nova_compute[236479]: pentium3 Oct 14 05:41:43 localhost nova_compute[236479]: pentium3-v1 Oct 14 05:41:43 localhost nova_compute[236479]: phenom Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: phenom-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: qemu32 Oct 14 05:41:43 localhost nova_compute[236479]: qemu32-v1 Oct 14 05:41:43 localhost nova_compute[236479]: qemu64 Oct 14 05:41:43 localhost nova_compute[236479]: qemu64-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: file Oct 14 05:41:43 localhost nova_compute[236479]: anonymous Oct 14 05:41:43 localhost nova_compute[236479]: memfd Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: disk Oct 14 05:41:43 localhost nova_compute[236479]: cdrom Oct 14 05:41:43 localhost nova_compute[236479]: floppy Oct 14 05:41:43 localhost nova_compute[236479]: lun Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: ide Oct 14 05:41:43 localhost nova_compute[236479]: fdc Oct 14 05:41:43 localhost nova_compute[236479]: scsi Oct 14 05:41:43 localhost nova_compute[236479]: virtio Oct 14 05:41:43 localhost 
nova_compute[236479]: usb Oct 14 05:41:43 localhost nova_compute[236479]: sata Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: virtio Oct 14 05:41:43 localhost nova_compute[236479]: virtio-transitional Oct 14 05:41:43 localhost nova_compute[236479]: virtio-non-transitional Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: vnc Oct 14 05:41:43 localhost nova_compute[236479]: egl-headless Oct 14 05:41:43 localhost nova_compute[236479]: dbus Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: subsystem Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: default Oct 14 05:41:43 localhost nova_compute[236479]: mandatory Oct 14 05:41:43 localhost nova_compute[236479]: requisite Oct 14 05:41:43 localhost nova_compute[236479]: optional Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: usb Oct 14 05:41:43 localhost nova_compute[236479]: pci Oct 14 05:41:43 localhost nova_compute[236479]: scsi Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: virtio Oct 14 05:41:43 localhost nova_compute[236479]: 
virtio-transitional Oct 14 05:41:43 localhost nova_compute[236479]: virtio-non-transitional Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: random Oct 14 05:41:43 localhost nova_compute[236479]: egd Oct 14 05:41:43 localhost nova_compute[236479]: builtin Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: path Oct 14 05:41:43 localhost nova_compute[236479]: handle Oct 14 05:41:43 localhost nova_compute[236479]: virtiofs Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: tpm-tis Oct 14 05:41:43 localhost nova_compute[236479]: tpm-crb Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: emulator Oct 14 05:41:43 localhost nova_compute[236479]: external Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 2.0 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: usb Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: pty Oct 14 05:41:43 localhost nova_compute[236479]: unix Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: qemu Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: builtin Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: default Oct 14 05:41:43 localhost nova_compute[236479]: passt Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: isa Oct 14 05:41:43 localhost nova_compute[236479]: hyperv Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: relaxed Oct 14 05:41:43 localhost nova_compute[236479]: vapic Oct 14 05:41:43 localhost nova_compute[236479]: spinlocks Oct 14 05:41:43 localhost nova_compute[236479]: vpindex Oct 14 05:41:43 localhost nova_compute[236479]: runtime Oct 14 05:41:43 localhost 
nova_compute[236479]: synic Oct 14 05:41:43 localhost nova_compute[236479]: stimer Oct 14 05:41:43 localhost nova_compute[236479]: reset Oct 14 05:41:43 localhost nova_compute[236479]: vendor_id Oct 14 05:41:43 localhost nova_compute[236479]: frequencies Oct 14 05:41:43 localhost nova_compute[236479]: reenlightenment Oct 14 05:41:43 localhost nova_compute[236479]: tlbflush Oct 14 05:41:43 localhost nova_compute[236479]: ipi Oct 14 05:41:43 localhost nova_compute[236479]: avic Oct 14 05:41:43 localhost nova_compute[236479]: emsr_bitmap Oct 14 05:41:43 localhost nova_compute[236479]: xmm_input Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:42.988 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: /usr/libexec/qemu-kvm Oct 14 05:41:43 localhost nova_compute[236479]: kvm Oct 14 05:41:43 localhost nova_compute[236479]: pc-q35-rhel9.6.0 Oct 14 05:41:43 localhost nova_compute[236479]: i686 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: rom Oct 14 05:41:43 localhost nova_compute[236479]: pflash Oct 14 
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: yes Oct 14 05:41:43 localhost nova_compute[236479]: no Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: no Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: on Oct 14 05:41:43 localhost nova_compute[236479]: off Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: on Oct 14 05:41:43 localhost nova_compute[236479]: off Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome Oct 14 05:41:43 localhost nova_compute[236479]: AMD Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 
Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 486 Oct 14 05:41:43 localhost nova_compute[236479]: 486-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-noTSX Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-noTSX-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 
localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-noTSX Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 
Cascadelake-Server-v5 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Conroe Oct 14 05:41:43 localhost nova_compute[236479]: Conroe-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Cooperlake Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cooperlake-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 
Oct 14 05:41:43 localhost nova_compute[236479]: [libvirt domain-capabilities dump; thousands of repeated, empty syslog-prefixed lines elided. Recoverable CPU model names, in original order:]
Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: SierraForest Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: SierraForest-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-noTSX-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Client-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-IBRS Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-noTSX-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v2 Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Skylake-Server-v5 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 
14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 
Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Westmere Oct 14 05:41:43 localhost nova_compute[236479]: Westmere-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Westmere-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Westmere-v2 Oct 14 05:41:43 localhost nova_compute[236479]: athlon Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: athlon-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: core2duo Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: core2duo-v1 Oct 14 05:41:43 
localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: coreduo Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: coreduo-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: kvm32 Oct 14 05:41:43 localhost nova_compute[236479]: kvm32-v1 Oct 14 05:41:43 localhost nova_compute[236479]: kvm64 Oct 14 05:41:43 localhost nova_compute[236479]: kvm64-v1 Oct 14 05:41:43 localhost nova_compute[236479]: n270 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: n270-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: pentium Oct 14 05:41:43 localhost nova_compute[236479]: pentium-v1 Oct 14 05:41:43 localhost nova_compute[236479]: pentium2 Oct 14 05:41:43 localhost nova_compute[236479]: pentium2-v1 Oct 14 05:41:43 localhost nova_compute[236479]: pentium3 Oct 14 05:41:43 localhost nova_compute[236479]: pentium3-v1 Oct 14 05:41:43 localhost nova_compute[236479]: phenom Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: phenom-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 
14 05:41:43 localhost nova_compute[236479]: qemu32 Oct 14 05:41:43 localhost nova_compute[236479]: qemu32-v1 Oct 14 05:41:43 localhost nova_compute[236479]: qemu64 Oct 14 05:41:43 localhost nova_compute[236479]: qemu64-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: file Oct 14 05:41:43 localhost nova_compute[236479]: anonymous Oct 14 05:41:43 localhost nova_compute[236479]: memfd Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: disk Oct 14 05:41:43 localhost nova_compute[236479]: cdrom Oct 14 05:41:43 localhost nova_compute[236479]: floppy Oct 14 05:41:43 localhost nova_compute[236479]: lun Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: fdc Oct 14 05:41:43 localhost nova_compute[236479]: scsi Oct 14 05:41:43 localhost nova_compute[236479]: virtio Oct 14 05:41:43 localhost nova_compute[236479]: usb Oct 14 05:41:43 localhost nova_compute[236479]: sata Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: virtio Oct 14 05:41:43 localhost nova_compute[236479]: virtio-transitional Oct 14 05:41:43 localhost nova_compute[236479]: virtio-non-transitional Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: vnc Oct 14 05:41:43 localhost nova_compute[236479]: egl-headless Oct 14 
05:41:43 localhost nova_compute[236479]: dbus Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: subsystem Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: default Oct 14 05:41:43 localhost nova_compute[236479]: mandatory Oct 14 05:41:43 localhost nova_compute[236479]: requisite Oct 14 05:41:43 localhost nova_compute[236479]: optional Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: usb Oct 14 05:41:43 localhost nova_compute[236479]: pci Oct 14 05:41:43 localhost nova_compute[236479]: scsi Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: virtio Oct 14 05:41:43 localhost nova_compute[236479]: virtio-transitional Oct 14 05:41:43 localhost nova_compute[236479]: virtio-non-transitional Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: random Oct 14 05:41:43 localhost nova_compute[236479]: egd Oct 14 05:41:43 localhost nova_compute[236479]: builtin Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: path Oct 14 05:41:43 localhost nova_compute[236479]: handle Oct 14 05:41:43 localhost 
nova_compute[236479]: virtiofs Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: tpm-tis Oct 14 05:41:43 localhost nova_compute[236479]: tpm-crb Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: emulator Oct 14 05:41:43 localhost nova_compute[236479]: external Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 2.0 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: usb Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: pty Oct 14 05:41:43 localhost nova_compute[236479]: unix Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: qemu Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: builtin Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: default Oct 14 05:41:43 localhost nova_compute[236479]: passt Oct 14 05:41:43 
localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: isa Oct 14 05:41:43 localhost nova_compute[236479]: hyperv Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: relaxed Oct 14 05:41:43 localhost nova_compute[236479]: vapic Oct 14 05:41:43 localhost nova_compute[236479]: spinlocks Oct 14 05:41:43 localhost nova_compute[236479]: vpindex Oct 14 05:41:43 localhost nova_compute[236479]: runtime Oct 14 05:41:43 localhost nova_compute[236479]: synic Oct 14 05:41:43 localhost nova_compute[236479]: stimer Oct 14 05:41:43 localhost nova_compute[236479]: reset Oct 14 05:41:43 localhost nova_compute[236479]: vendor_id Oct 14 05:41:43 localhost nova_compute[236479]: frequencies Oct 14 05:41:43 localhost nova_compute[236479]: reenlightenment Oct 14 05:41:43 localhost nova_compute[236479]: tlbflush Oct 14 05:41:43 localhost nova_compute[236479]: ipi Oct 14 05:41:43 localhost nova_compute[236479]: avic Oct 14 05:41:43 localhost nova_compute[236479]: emsr_bitmap Oct 14 05:41:43 localhost nova_compute[236479]: xmm_input Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
Oct 14 05:41:43 localhost nova_compute[236479]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:42.997 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.000 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 14 05:41:43 localhost nova_compute[236479]: [domain capabilities XML for machine_type=pc; element tags lost in extraction — recoverable values condensed below]
Oct 14 05:41:43 localhost nova_compute[236479]:   emulator: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64
Oct 14 05:41:43 localhost nova_compute[236479]:   loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; types: rom pflash; readonly: yes no; secure: no
Oct 14 05:41:43 localhost nova_compute[236479]:   host CPU (host-model): EPYC-Rome, vendor AMD
Oct 14 05:41:43 localhost nova_compute[236479]:   CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 (list continues)
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v5 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Conroe Oct 14 05:41:43 localhost nova_compute[236479]: Conroe-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Cooperlake Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cooperlake-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 
localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cooperlake-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Denverton Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Denverton-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Denverton-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Denverton-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Dhyana Oct 14 05:41:43 localhost nova_compute[236479]: Dhyana-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Dhyana-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Genoa Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Genoa-v1 
Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-IBPB Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Milan Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Milan-v1 Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Milan-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome-v4 Oct 14 05:41:43 localhost 
nova_compute[236479]: EPYC-v1 Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-v2 Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: GraniteRapids Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: GraniteRapids-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 
localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: GraniteRapids-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Haswell Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 
Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Haswell-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Haswell-noTSX Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Haswell-noTSX-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Haswell-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Haswell-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Haswell-v3 
Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Haswell-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Icelake-Server Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Icelake-Server-noTSX Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Icelake-Server-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Icelake-Server-v2 Oct 14 05:41:43 
Oct 14 05:41:43 localhost nova_compute[236479]: [libvirt domainCapabilities output; XML markup was lost in capture, leaving only text values. Recoverable values follow, in source order; group labels are inferred from the libvirt domainCapabilities schema.]
Oct 14 05:41:43 localhost nova_compute[236479]: CPU models (list continues from preceding lines): Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Oct 14 05:41:43 localhost nova_compute[236479]: memory backing source types: file anonymous memfd
Oct 14 05:41:43 localhost nova_compute[236479]: disk devices: disk cdrom floppy lun; buses: ide fdc scsi virtio usb sata; models: virtio virtio-transitional virtio-non-transitional
Oct 14 05:41:43 localhost nova_compute[236479]: graphics types: vnc egl-headless dbus
Oct 14 05:41:43 localhost nova_compute[236479]: hostdev mode: subsystem; startup policies: default mandatory requisite optional; subsystem types: usb pci scsi
Oct 14 05:41:43 localhost nova_compute[236479]: rng models: virtio virtio-transitional virtio-non-transitional; backends: random egd builtin
Oct 14 05:41:43 localhost nova_compute[236479]: filesystem driver types: path handle virtiofs
Oct 14 05:41:43 localhost nova_compute[236479]: tpm models: tpm-tis tpm-crb; backends: emulator external; backend version: 2.0
Oct 14 05:41:43 localhost nova_compute[236479]: redirdev bus: usb; channel types: pty unix
Oct 14 05:41:43 localhost nova_compute[236479]: crypto type/backend: qemu builtin
Oct 14 05:41:43 localhost nova_compute[236479]: interface backends: default passt
Oct 14 05:41:43 localhost nova_compute[236479]: panic models: isa [line truncated in source]
nova_compute[236479]: hyperv Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: relaxed Oct 14 05:41:43 localhost nova_compute[236479]: vapic Oct 14 05:41:43 localhost nova_compute[236479]: spinlocks Oct 14 05:41:43 localhost nova_compute[236479]: vpindex Oct 14 05:41:43 localhost nova_compute[236479]: runtime Oct 14 05:41:43 localhost nova_compute[236479]: synic Oct 14 05:41:43 localhost nova_compute[236479]: stimer Oct 14 05:41:43 localhost nova_compute[236479]: reset Oct 14 05:41:43 localhost nova_compute[236479]: vendor_id Oct 14 05:41:43 localhost nova_compute[236479]: frequencies Oct 14 05:41:43 localhost nova_compute[236479]: reenlightenment Oct 14 05:41:43 localhost nova_compute[236479]: tlbflush Oct 14 05:41:43 localhost nova_compute[236479]: ipi Oct 14 05:41:43 localhost nova_compute[236479]: avic Oct 14 05:41:43 localhost nova_compute[236479]: emsr_bitmap Oct 14 05:41:43 localhost nova_compute[236479]: xmm_input Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: _get_domain_capabilities 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.051 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: /usr/libexec/qemu-kvm Oct 14 05:41:43 localhost nova_compute[236479]: kvm Oct 14 05:41:43 localhost nova_compute[236479]: pc-q35-rhel9.6.0 Oct 14 05:41:43 localhost nova_compute[236479]: x86_64 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: efi Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Oct 14 05:41:43 localhost nova_compute[236479]: /usr/share/edk2/ovmf/OVMF_CODE.fd Oct 14 05:41:43 localhost nova_compute[236479]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Oct 14 05:41:43 localhost nova_compute[236479]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: rom Oct 14 05:41:43 localhost nova_compute[236479]: pflash Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: yes Oct 14 05:41:43 localhost nova_compute[236479]: no Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: yes Oct 14 05:41:43 localhost nova_compute[236479]: no Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: on Oct 14 05:41:43 localhost nova_compute[236479]: off Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: on Oct 14 05:41:43 localhost nova_compute[236479]: off Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome Oct 14 05:41:43 localhost nova_compute[236479]: AMD Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 486 Oct 14 05:41:43 localhost nova_compute[236479]: 486-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-noTSX Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-noTSX-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 
Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Broadwell-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-noTSX Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 
05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cascadelake-Server-v5 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 
localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Conroe Oct 14 05:41:43 localhost nova_compute[236479]: Conroe-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Cooperlake Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cooperlake-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 
14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Cooperlake-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Denverton Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Denverton-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Denverton-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Denverton-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 
localhost nova_compute[236479]: Dhyana Oct 14 05:41:43 localhost nova_compute[236479]: Dhyana-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Dhyana-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Genoa Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Genoa-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-IBPB Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Milan Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Milan-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 
localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Milan-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-Rome-v4 Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-v1 Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-v2 Oct 14 05:41:43 localhost nova_compute[236479]: 
EPYC-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: EPYC-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: GraniteRapids Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 
Oct 14 05:41:43 localhost nova_compute[236479]: [libvirt capabilities XML dump; markup lost in log capture, each stripped XML line left a repeated syslog prefix — only the reported CPU model names survive, listed here in original order] GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 [dump truncated mid-entry]
14 05:41:43 localhost nova_compute[236479]: Snowridge Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v2 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v3 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Snowridge-v4 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Westmere Oct 14 05:41:43 localhost nova_compute[236479]: Westmere-IBRS Oct 14 05:41:43 localhost nova_compute[236479]: Westmere-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Westmere-v2 Oct 14 05:41:43 localhost nova_compute[236479]: athlon Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: athlon-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: core2duo Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: core2duo-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: coreduo Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: coreduo-v1 Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: kvm32 Oct 14 05:41:43 localhost nova_compute[236479]: kvm32-v1 Oct 14 05:41:43 localhost nova_compute[236479]: kvm64 Oct 14 05:41:43 localhost nova_compute[236479]: kvm64-v1 Oct 14 05:41:43 localhost nova_compute[236479]: n270 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: n270-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: pentium Oct 14 05:41:43 localhost nova_compute[236479]: pentium-v1 Oct 14 05:41:43 localhost nova_compute[236479]: pentium2 Oct 14 05:41:43 localhost nova_compute[236479]: pentium2-v1 Oct 14 05:41:43 localhost nova_compute[236479]: pentium3 Oct 14 05:41:43 localhost nova_compute[236479]: pentium3-v1 Oct 14 05:41:43 localhost nova_compute[236479]: phenom Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: phenom-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: qemu32 Oct 14 05:41:43 localhost nova_compute[236479]: qemu32-v1 Oct 14 05:41:43 localhost nova_compute[236479]: qemu64 Oct 14 05:41:43 localhost nova_compute[236479]: qemu64-v1 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost 
nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: file Oct 14 05:41:43 localhost nova_compute[236479]: anonymous Oct 14 05:41:43 localhost nova_compute[236479]: memfd Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: disk Oct 14 05:41:43 localhost nova_compute[236479]: cdrom Oct 14 05:41:43 localhost nova_compute[236479]: floppy Oct 14 05:41:43 localhost nova_compute[236479]: lun Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: fdc Oct 14 05:41:43 localhost nova_compute[236479]: scsi Oct 14 05:41:43 localhost nova_compute[236479]: virtio Oct 14 05:41:43 localhost nova_compute[236479]: usb Oct 14 05:41:43 localhost nova_compute[236479]: sata Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: virtio Oct 14 05:41:43 localhost nova_compute[236479]: virtio-transitional Oct 14 05:41:43 localhost nova_compute[236479]: virtio-non-transitional Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: vnc Oct 14 05:41:43 localhost nova_compute[236479]: egl-headless Oct 14 05:41:43 localhost nova_compute[236479]: dbus Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: subsystem Oct 14 05:41:43 localhost nova_compute[236479]: 
Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: default Oct 14 05:41:43 localhost nova_compute[236479]: mandatory Oct 14 05:41:43 localhost nova_compute[236479]: requisite Oct 14 05:41:43 localhost nova_compute[236479]: optional Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: usb Oct 14 05:41:43 localhost nova_compute[236479]: pci Oct 14 05:41:43 localhost nova_compute[236479]: scsi Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: virtio Oct 14 05:41:43 localhost nova_compute[236479]: virtio-transitional Oct 14 05:41:43 localhost nova_compute[236479]: virtio-non-transitional Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: random Oct 14 05:41:43 localhost nova_compute[236479]: egd Oct 14 05:41:43 localhost nova_compute[236479]: builtin Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: path Oct 14 05:41:43 localhost nova_compute[236479]: handle Oct 14 05:41:43 localhost nova_compute[236479]: virtiofs Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: tpm-tis Oct 14 05:41:43 localhost nova_compute[236479]: tpm-crb Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 
localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: emulator Oct 14 05:41:43 localhost nova_compute[236479]: external Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: 2.0 Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: usb Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: pty Oct 14 05:41:43 localhost nova_compute[236479]: unix Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: qemu Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: builtin Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: default Oct 14 05:41:43 localhost nova_compute[236479]: passt Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: isa Oct 14 05:41:43 localhost nova_compute[236479]: hyperv Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 
localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: relaxed Oct 14 05:41:43 localhost nova_compute[236479]: vapic Oct 14 05:41:43 localhost nova_compute[236479]: spinlocks Oct 14 05:41:43 localhost nova_compute[236479]: vpindex Oct 14 05:41:43 localhost nova_compute[236479]: runtime Oct 14 05:41:43 localhost nova_compute[236479]: synic Oct 14 05:41:43 localhost nova_compute[236479]: stimer Oct 14 05:41:43 localhost nova_compute[236479]: reset Oct 14 05:41:43 localhost nova_compute[236479]: vendor_id Oct 14 05:41:43 localhost nova_compute[236479]: frequencies Oct 14 05:41:43 localhost nova_compute[236479]: reenlightenment Oct 14 05:41:43 localhost nova_compute[236479]: tlbflush Oct 14 05:41:43 localhost nova_compute[236479]: ipi Oct 14 05:41:43 localhost nova_compute[236479]: avic Oct 14 05:41:43 localhost nova_compute[236479]: emsr_bitmap Oct 14 05:41:43 localhost nova_compute[236479]: xmm_input Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: Oct 14 05:41:43 localhost nova_compute[236479]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.095 2 DEBUG 
nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.096 2 INFO nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Secure Boot support detected#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.098 2 INFO nova.virt.libvirt.driver [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.109 2 DEBUG nova.virt.libvirt.driver [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14
09:41:43.133 2 INFO nova.virt.node [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Determined node identity ebb6de71-88e5-4477-92fc-f2b9532f7fcd from /var/lib/nova/compute_id#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.151 2 DEBUG nova.compute.manager [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Verified node ebb6de71-88e5-4477-92fc-f2b9532f7fcd matches my host np0005486731.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.182 2 INFO nova.compute.manager [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Oct 14 05:41:43 localhost python3.9[236626]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None 
health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Oct 14 05:41:43 localhost systemd[1]: Started libpod-conmon-8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20.scope. Oct 14 05:41:43 localhost systemd[1]: Started libcrun container. 
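[editor's note] The ansible podman_container task above effectively performs a `podman run` built from the config_data fields that appear in the later container-lifecycle log lines (image, command, net, security_opt, environment, volumes). A minimal sketch of that reconstruction, assuming this hypothetical helper name and flag order:

```python
# Hedged sketch: rebuild the `podman run` argv for nova_compute_init from the
# config_data visible in the log. build_podman_run is a hypothetical helper,
# not part of edpm_ansible; only flags known to exist in podman are used.
def build_podman_run(image, name, command, volumes, env):
    cmd = ["podman", "run", "--name", name,
           "--net", "none",                    # config_data: 'net': 'none'
           "--security-opt", "label=disable",  # config_data: security_opt
           "--user", "root"]                   # config_data: 'user': 'root'
    for key, value in env.items():
        cmd += ["--env", f"{key}={value}"]
    for volume in volumes:                     # host:container[:options] specs
        cmd += ["--volume", volume]
    # config_data command is a bash one-liner piped through logger
    cmd += [image, "bash", "-c", command]
    return cmd
```

Calling it with the image, volumes, and environment from the log yields an argv list suitable for `subprocess.run`.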
Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.412 2 DEBUG oslo_concurrency.lockutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.412 2 DEBUG oslo_concurrency.lockutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.413 2 DEBUG oslo_concurrency.lockutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.413 2 DEBUG nova.compute.resource_tracker [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.414 2 DEBUG oslo_concurrency.processutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:41:43 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Oct 14 05:41:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 05:41:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Oct 14 05:41:43 localhost podman[236651]: 2025-10-14 09:41:43.42421018 +0000 UTC m=+0.090641910 container init 8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, 
io.buildah.version=1.41.3) Oct 14 05:41:43 localhost podman[236651]: 2025-10-14 09:41:43.434332124 +0000 UTC m=+0.100763864 container start 8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 05:41:43 localhost python3.9[236626]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Applying nova statedir ownership Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 
42436:42436 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: 
/var/lib/nova/.cache/python-entrypoints/ Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7dbe5bae7bc27ef07490c629ec1f09edaa9e8c135ff89c3f08f1e44f39cf5928 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/9469aff02825a9e3dcdb3ceeb358f8d540dc07c8b6e9cd975f170399051d29c3 Oct 14 05:41:43 localhost nova_compute_init[236671]: INFO:nova_statedir:Nova statedir ownership complete Oct 14 05:41:43 localhost systemd[1]: libpod-8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20.scope: Deactivated successfully. 
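[editor's note] The nova_statedir_ownership.py run logged above walks /var/lib/nova, checks each entry's current uid:gid against the target 42436:42436, chowns mismatches, and resets the SELinux context, skipping paths listed in NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch of that check, under the assumption that `plan_ownership` is a hypothetical stand-in for the real script (which also performs the chown and SELinux relabel):

```python
import os

TARGET_UID, TARGET_GID = 42436, 42436  # target nova uid:gid seen in the log
SKIP = {"/var/lib/nova/compute_id"}    # NOVA_STATEDIR_OWNERSHIP_SKIP

def plan_ownership(root):
    """Walk a state directory and report, per entry, its current uid/gid and
    whether a chown to TARGET_UID:TARGET_GID would be needed; mirrors the
    'Checking uid: ... path: ...' / 'Changing ownership ...' log lines."""
    plan = []
    for dirpath, dirnames, filenames in os.walk(root):
        for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if entry in SKIP:
                continue
            st = os.lstat(entry)  # lstat: do not follow symlinks
            needs_chown = (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID)
            plan.append((entry, st.st_uid, st.st_gid, needs_chown))
    return plan
```

Applying the plan would then be one `os.chown(entry, TARGET_UID, TARGET_GID)` per flagged entry, run as root inside the init container.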
Oct 14 05:41:43 localhost podman[236672]: 2025-10-14 09:41:43.509248611 +0000 UTC m=+0.055048178 container died 8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:41:43 localhost podman[236683]: 2025-10-14 09:41:43.561353903 +0000 UTC m=+0.071579762 container cleanup 8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, container_name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 05:41:43 localhost systemd[1]: libpod-conmon-8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20.scope: Deactivated successfully.
Oct 14 05:41:43 localhost nova_compute[236479]: 2025-10-14 09:41:43.832 2 DEBUG oslo_concurrency.processutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.045 2 WARNING nova.virt.libvirt.driver [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.047 2 DEBUG nova.compute.resource_tracker [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=13544MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.047 2 DEBUG oslo_concurrency.lockutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.048 2 DEBUG oslo_concurrency.lockutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38252 DF PROTO=TCP SPT=39264 DPT=9101 SEQ=945735126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7619D0B30000000001030307)
Oct 14 05:41:44 localhost systemd[1]: session-54.scope: Deactivated successfully.
Oct 14 05:41:44 localhost systemd[1]: session-54.scope: Consumed 2min 36.196s CPU time.
Oct 14 05:41:44 localhost systemd-logind[760]: Session 54 logged out. Waiting for processes to exit.
Oct 14 05:41:44 localhost systemd-logind[760]: Removed session 54.
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.350 2 DEBUG nova.compute.resource_tracker [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.351 2 DEBUG nova.compute.resource_tracker [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.366 2 DEBUG nova.scheduler.client.report [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:41:44 localhost systemd[1]: var-lib-containers-storage-overlay-02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01-merged.mount: Deactivated successfully.
Oct 14 05:41:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20-userdata-shm.mount: Deactivated successfully.
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.571 2 DEBUG nova.scheduler.client.report [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.571 2 DEBUG nova.compute.provider_tree [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.590 2 DEBUG nova.scheduler.client.report [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.613 2 DEBUG nova.scheduler.client.report [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSSE3,COMPUTE_NODE,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:41:44 localhost nova_compute[236479]: 2025-10-14 09:41:44.631 2 DEBUG oslo_concurrency.processutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.096 2 DEBUG oslo_concurrency.processutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.101 2 DEBUG nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 14 05:41:45 localhost nova_compute[236479]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.101 2 INFO nova.virt.libvirt.host [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.103 2 DEBUG nova.compute.provider_tree [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.103 2 DEBUG nova.virt.libvirt.driver [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.134 2 DEBUG nova.scheduler.client.report [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.163 2 DEBUG nova.compute.resource_tracker [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.164 2 DEBUG oslo_concurrency.lockutils [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.164 2 DEBUG nova.service [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.194 2 DEBUG nova.service [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct 14 05:41:45 localhost nova_compute[236479]: 2025-10-14 09:41:45.195 2 DEBUG nova.servicegroup.drivers.db [None req-cd3852b1-7e79-4632-8500-515a3f9d210a - - - - - -] DB_Driver: join new ServiceGroup member np0005486731.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct 14 05:41:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38254 DF PROTO=TCP SPT=39264 DPT=9101 SEQ=945735126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7619DCA90000000001030307)
Oct 14 05:41:50 localhost sshd[236767]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:41:50 localhost systemd-logind[760]: New session 57 of user zuul.
Oct 14 05:41:50 localhost systemd[1]: Started Session 57 of User zuul.
Oct 14 05:41:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38255 DF PROTO=TCP SPT=39264 DPT=9101 SEQ=945735126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7619EC690000000001030307)
Oct 14 05:41:51 localhost python3.9[236878]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 05:41:53 localhost python3.9[236992]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 05:41:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 05:41:53 localhost systemd[1]: Reloading.
Oct 14 05:41:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8383 DF PROTO=TCP SPT=53566 DPT=9102 SEQ=171356614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7619F61E0000000001030307)
Oct 14 05:41:53 localhost systemd-rc-local-generator[237027]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:41:53 localhost systemd-sysv-generator[237030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:41:53 localhost podman[236994]: 2025-10-14 09:41:53.86878837 +0000 UTC m=+0.112927182 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:41:53 localhost podman[236994]: 2025-10-14 09:41:53.879146571 +0000 UTC m=+0.123285343 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:41:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:41:54 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 05:41:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8384 DF PROTO=TCP SPT=53566 DPT=9102 SEQ=171356614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7619FA290000000001030307)
Oct 14 05:41:54 localhost python3.9[237157]: ansible-ansible.builtin.service_facts Invoked
Oct 14 05:41:55 localhost network[237174]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 05:41:55 localhost network[237175]: 'network-scripts' will be removed from distribution in near future.
Oct 14 05:41:55 localhost network[237176]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 05:41:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 05:41:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:41:56 localhost podman[237218]: 2025-10-14 09:41:56.326196773 +0000 UTC m=+0.087112577 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:41:56 localhost podman[237218]: 2025-10-14 09:41:56.33909133 +0000 UTC m=+0.100007134 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:41:56 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 05:41:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:41:57.598 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:41:57.598 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:41:57.598 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:59 localhost python3.9[237431]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:42:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5007 DF PROTO=TCP SPT=59888 DPT=9100 SEQ=3543066333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A10030000000001030307)
Oct 14 05:42:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5829 DF PROTO=TCP SPT=58416 DPT=9882 SEQ=1858786310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A10A90000000001030307)
Oct 14 05:42:01 localhost python3.9[237542]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:42:01 localhost systemd-journald[47332]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation.
Oct 14 05:42:01 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 14 05:42:01 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:42:01 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:42:02 localhost python3.9[237653]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:42:03 localhost python3.9[237763]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:42:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:42:03 localhost podman[237766]: 2025-10-14 09:42:03.174329875 +0000 UTC m=+0.079671562 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:42:03 localhost podman[237766]: 2025-10-14 09:42:03.241155971 +0000 UTC m=+0.146497678 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:42:03 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 05:42:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5009 DF PROTO=TCP SPT=59888 DPT=9100 SEQ=3543066333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A1C2A0000000001030307) Oct 14 05:42:04 localhost python3.9[237898]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 14 05:42:05 localhost python3.9[238008]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:42:05 localhost systemd[1]: Reloading. Oct 14 05:42:05 localhost systemd-rc-local-generator[238037]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:42:05 localhost systemd-sysv-generator[238040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:42:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:42:06 localhost python3.9[238155]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:42:07 localhost python3.9[238266]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:42:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5010 DF PROTO=TCP SPT=59888 DPT=9100 SEQ=3543066333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A2BE90000000001030307) Oct 14 05:42:07 localhost python3.9[238374]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:42:08 localhost python3.9[238484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:09 localhost python3.9[238570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434928.1780906-359-229526622780920/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 
checksum=055fa19b65f9037195e9940c9abee94df421f9ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:42:10 localhost python3.9[238680]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None Oct 14 05:42:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47962 DF PROTO=TCP SPT=44620 DPT=9105 SEQ=2584836426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A372A0000000001030307) Oct 14 05:42:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:42:10 localhost podman[238698]: 2025-10-14 09:42:10.553918936 +0000 UTC m=+0.089789147 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 05:42:10 localhost podman[238698]: 2025-10-14 09:42:10.599655272 +0000 UTC m=+0.135525433 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:42:10 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:42:12 localhost python3.9[238808]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None Oct 14 05:42:12 localhost python3.9[238919]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 14 05:42:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2503 DF PROTO=TCP SPT=32806 DPT=9101 SEQ=3776575742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A45E30000000001030307) Oct 14 05:42:14 localhost python3.9[239035]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 
ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005486731.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Oct 14 05:42:15 localhost nova_compute[236479]: 2025-10-14 09:42:15.196 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:15 localhost nova_compute[236479]: 2025-10-14 09:42:15.219 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:15 localhost python3.9[239151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:16 localhost python3.9[239237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1760434935.468759-563-1844506692559/.source.conf _original_basename=ceilometer.conf follow=False checksum=035860cf668b88822e0cefeecfa174979afca855 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:17 
localhost python3.9[239399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2505 DF PROTO=TCP SPT=32806 DPT=9101 SEQ=3776575742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A51E90000000001030307) Oct 14 05:42:17 localhost python3.9[239499]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1760434936.567221-563-29606360769938/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:18 localhost python3.9[239625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:18 localhost python3.9[239711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1760434937.6695883-563-238898901066308/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:19 localhost python3.9[239819]: 
ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:42:20 localhost python3.9[239927]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:42:20 localhost python3.9[240035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2506 DF PROTO=TCP SPT=32806 DPT=9101 SEQ=3776575742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A61A90000000001030307) Oct 14 05:42:21 localhost python3.9[240121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434940.4151838-740-183654763899632/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:22 localhost python3.9[240229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:22 localhost python3.9[240284]: ansible-ansible.legacy.file Invoked with mode=420 
dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:23 localhost python3.9[240392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:23 localhost python3.9[240478]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434942.7139266-740-136441453775997/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49117 DF PROTO=TCP SPT=42592 DPT=9102 SEQ=204526529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A6B4E0000000001030307) Oct 14 05:42:24 localhost python3.9[240586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:24 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:42:24 localhost systemd[1]: tmp-crun.hyTE3y.mount: Deactivated successfully. Oct 14 05:42:24 localhost podman[240620]: 2025-10-14 09:42:24.566497899 +0000 UTC m=+0.096077872 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid) Oct 14 05:42:24 localhost podman[240620]: 2025-10-14 
09:42:24.602223172 +0000 UTC m=+0.131803105 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid) Oct 14 05:42:24 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:42:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49118 DF PROTO=TCP SPT=42592 DPT=9102 SEQ=204526529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A6F690000000001030307) Oct 14 05:42:24 localhost python3.9[240691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434943.8797998-740-180669698841351/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:25 localhost python3.9[240799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:26 localhost python3.9[240885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434945.021329-740-276933619510347/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:42:26 localhost podman[240965]: 2025-10-14 09:42:26.541290601 +0000 UTC m=+0.079891259 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 05:42:26 localhost podman[240965]: 2025-10-14 09:42:26.579771876 +0000 UTC m=+0.118372534 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251009, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 05:42:26 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:42:26 localhost python3.9[241008]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:27 localhost python3.9[241098]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434946.226538-740-18535894361373/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:27 localhost python3.9[241206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:28 localhost python3.9[241292]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434947.4505568-740-152160399858242/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:29 localhost python3.9[241400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:29 localhost python3.9[241486]: ansible-ansible.legacy.copy 
Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434948.6185467-740-75923615486411/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:30 localhost python3.9[241594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6321 DF PROTO=TCP SPT=51466 DPT=9100 SEQ=2617122253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A85330000000001030307) Oct 14 05:42:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31312 DF PROTO=TCP SPT=42536 DPT=9882 SEQ=1737747232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A85D90000000001030307) Oct 14 05:42:30 localhost python3.9[241680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434949.774949-740-44677719179701/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:31 localhost python3.9[241788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:31 localhost python3.9[241874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434950.8349879-740-79273405807478/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:32 localhost python3.9[241982]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:32 localhost python3.9[242068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760434951.9697335-740-27494356719693/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:42:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6323 DF PROTO=TCP SPT=51466 DPT=9100 SEQ=2617122253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761A91290000000001030307) Oct 14 05:42:33 localhost podman[242135]: 2025-10-14 09:42:33.55095396 +0000 UTC m=+0.082811425 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 05:42:33 localhost podman[242135]: 2025-10-14 09:42:33.597112434 +0000 UTC m=+0.128969839 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:42:33 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:42:33 localhost python3.9[242203]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:42:34 localhost python3.9[242313]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:42:34 localhost systemd[1]: Reloading. Oct 14 05:42:34 localhost systemd-sysv-generator[242346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:42:34 localhost systemd-rc-local-generator[242343]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:42:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:42:35 localhost systemd[1]: Listening on Podman API Socket. 
Oct 14 05:42:35 localhost python3.9[242464]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:36 localhost python3.9[242552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434955.4407592-1256-193885431498580/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 14 05:42:36 localhost python3.9[242607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6324 DF PROTO=TCP SPT=51466 DPT=9100 SEQ=2617122253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761AA0E90000000001030307) Oct 14 05:42:37 localhost python3.9[242695]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434955.4407592-1256-193885431498580/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
seuser=None serole=None selevel=None attributes=None Oct 14 05:42:38 localhost python3.9[242805]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False Oct 14 05:42:39 localhost python3.9[242915]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:42:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48849 DF PROTO=TCP SPT=56168 DPT=9105 SEQ=2931115432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761AAC290000000001030307) Oct 14 05:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:42:40 localhost podman[243025]: 2025-10-14 09:42:40.866624528 +0000 UTC m=+0.085450653 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 05:42:40 localhost podman[243025]: 2025-10-14 09:42:40.877203748 +0000 UTC m=+0.096029813 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Oct 14 05:42:40 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:42:41 localhost python3[243026]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:42:41 localhost python3[243026]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "ff8aaa87a0dadf978d112c753603163797c5ab8a31d9fdfbc1412a1a3cc6baaa",#012 "Digest": "sha256:fdfe6c13298281d9bde0044bcf6e037d1a31c741234642f0584858e76761296b",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:fdfe6c13298281d9bde0044bcf6e037d1a31c741234642f0584858e76761296b"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-14T06:21:17.025659624Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "0468cb21803d466b2abfe00835cf1d2d",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 505004291,#012 "VirtualSize": 505004291,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/56898ab6d39b47764ac69f563001cff1a6e38a16fd0080c65298dff54892d790/diff:/var/lib/containers/storage/overlay/1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec/diff:/var/lib/containers/storage/overlay/0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/3a5231add129a89d0adead7ab11bea3dfa286b532e456cc25a1ad81207e8880c/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/3a5231add129a89d0adead7ab11bea3dfa286b532e456cc25a1ad81207e8880c/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",#012 "sha256:2896905ce9321c1f2feb1f3ada413e86eda3444455358ab965478a041351b392",#012 "sha256:f640179b0564dc7abbe22bd39fc8810d5bbb8e54094fe7ebc5b3c45b658c4983",#012 "sha256:a244c51d91c7fa48dd864b4fedb26f2afb3cd16eb13faecea61eec45f3182851",#012 
"sha256:4da4e1be651faf4cb682c510a475353c690bc8308e24a4b892f317b994e706e4"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "0468cb21803d466b2abfe00835cf1d2d",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-09T00:18:03.867908726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:03.868015697Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251009\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:07.890794359Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-14T06:08:54.969219151Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969253522Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969285133Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969308103Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 
"created": "2025-10-14T06:08:54.969342284Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969363945Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:55.340499198Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:09:32.389605838Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 Oct 14 05:42:41 localhost podman[243092]: 2025-10-14 09:42:41.411422856 +0000 UTC m=+0.093388124 container remove 1069dbc2aaacebd348284164e9eb4c6b426e4ba17acad40d7eda33404731d6c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, 
distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6fab081f94b3dd479fa1fef3dbed1d07'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Oct 14 05:42:41 localhost python3[243026]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute Oct 14 05:42:41 localhost 
ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:42:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 5658 writes, 25K keys, 5658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5658 writes, 708 syncs, 7.99 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:42:41 localhost podman[243103]: Oct 14 05:42:41 localhost podman[243103]: 2025-10-14 09:42:41.522590866 +0000 UTC m=+0.090899398 container create 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:42:41 localhost podman[243103]: 2025-10-14 09:42:41.478844554 +0000 UTC m=+0.047153166 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Oct 14 05:42:41 localhost python3[243026]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.166 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.167 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:42 localhost 
nova_compute[236479]: 2025-10-14 09:42:42.167 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.167 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.179 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.180 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.180 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.180 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.181 2 DEBUG oslo_service.periodic_task [None 
req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.181 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.181 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.181 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.181 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.204 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.205 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.205 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.205 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.205 2 DEBUG 
oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:42:42 localhost python3.9[243250]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.664 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.810 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.811 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=13535MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.811 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.811 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.869 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.869 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:42:42 localhost nova_compute[236479]: 2025-10-14 09:42:42.895 2 DEBUG oslo_concurrency.processutils [None 
req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:42:43 localhost python3.9[243385]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:43 localhost nova_compute[236479]: 2025-10-14 09:42:43.307 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:42:43 localhost nova_compute[236479]: 2025-10-14 09:42:43.314 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:42:43 localhost nova_compute[236479]: 2025-10-14 09:42:43.334 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 
'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:42:43 localhost nova_compute[236479]: 2025-10-14 09:42:43.335 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:42:43 localhost nova_compute[236479]: 2025-10-14 09:42:43.336 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:42:43 localhost python3.9[243515]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760434963.170912-1448-38220226524281/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64599 DF PROTO=TCP SPT=56794 DPT=9101 SEQ=964255269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761ABB130000000001030307) Oct 14 05:42:44 localhost python3.9[243570]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:42:44 localhost systemd[1]: Reloading. 
Oct 14 05:42:44 localhost systemd-rc-local-generator[243595]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:42:44 localhost systemd-sysv-generator[243602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:42:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:42:45 localhost python3.9[243662]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:42:45 localhost systemd[1]: Reloading. Oct 14 05:42:45 localhost systemd-sysv-generator[243693]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:42:45 localhost systemd-rc-local-generator[243689]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:42:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:42:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:42:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 4839 writes, 21K keys, 4839 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4839 writes, 659 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:42:46 localhost systemd[1]: Starting ceilometer_agent_compute container... Oct 14 05:42:46 localhost systemd[1]: tmp-crun.kerBWS.mount: Deactivated successfully. Oct 14 05:42:46 localhost systemd[1]: Started libcrun container. Oct 14 05:42:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580a94d220f8f7b88b6d35dc7cc2a43d8f9dc291f1fe97c534ccb4743deb8c0a/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Oct 14 05:42:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580a94d220f8f7b88b6d35dc7cc2a43d8f9dc291f1fe97c534ccb4743deb8c0a/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Oct 14 05:42:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:42:46 localhost podman[243702]: 2025-10-14 09:42:46.315364416 +0000 UTC m=+0.135422684 container init 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: + sudo -E kolla_set_configs Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: sudo: unable to send 
audit message: Operation not permitted Oct 14 05:42:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:42:46 localhost podman[243702]: 2025-10-14 09:42:46.348282852 +0000 UTC m=+0.168341080 container start 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS) Oct 14 05:42:46 localhost podman[243702]: ceilometer_agent_compute Oct 14 05:42:46 localhost systemd[1]: Started ceilometer_agent_compute container. Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Validating config file Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Copying service configuration files Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 14 05:42:46 
localhost ceilometer_agent_compute[243716]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: INFO:__main__:Writing out command to execute Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: ++ cat /run_command Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: + ARGS= Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: + sudo kolla_copy_cacerts Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: sudo: unable to send audit message: Operation not permitted Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: + [[ ! -n '' ]] Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: + . kolla_extend_start Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: + umask 0022 Oct 14 05:42:46 localhost ceilometer_agent_compute[243716]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Oct 14 05:42:46 localhost podman[243724]: 2025-10-14 09:42:46.45936735 +0000 UTC m=+0.106788600 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 
'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 14 05:42:46 localhost podman[243724]: 2025-10-14 09:42:46.466703431 +0000 UTC m=+0.114124711 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 
'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 14 05:42:46 localhost podman[243724]: unhealthy Oct 14 05:42:46 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:42:46 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Failed with result 'exit-code'. 
Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.141 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.141 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.141 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.141 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.141 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.141 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost 
ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG 
cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.142 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 
09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.143 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] 
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 
2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.144 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 
localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.145 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG 
cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.146 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.147 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] 
monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.148 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost 
ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.149 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 
14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue 
[-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.150 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG 
cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.151 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 
2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.152 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.153 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.154 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.172 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.173 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.174 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. 
Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.268 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Oct 14 05:42:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64601 DF PROTO=TCP SPT=56794 DPT=9101 SEQ=964255269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761AC7290000000001030307) Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.327 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] 
================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.328 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 
'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG 
cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.329 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG 
cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.330 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG 
cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost 
ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.331 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.332 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.333 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG 
cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.334 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.335 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 
09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.336 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 
09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.337 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 
DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.338 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 
12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.339 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost 
ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.340 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] 
service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.341 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost 
ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 
localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.342 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 
05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.343 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost 
ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost 
ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.344 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost 
ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.345 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.346 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.346 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.346 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.346 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.346 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.346 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.349 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.355 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.358 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.358 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.358 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.358 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.358 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.358 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.359 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.359 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.359 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.359 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.359 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.359 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.359 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.360 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.360 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.360 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.360 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.360 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.360 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.360 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.361 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.361 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.361 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 
localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.361 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:47 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:47.361 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:48 localhost python3.9[243860]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:42:48 localhost systemd[1]: Stopping ceilometer_agent_compute container... Oct 14 05:42:48 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:48.323 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process Oct 14 05:42:48 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:48.424 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304 Oct 14 05:42:48 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:48.424 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308 Oct 14 05:42:48 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:48.424 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12] Oct 14 05:42:48 localhost ceilometer_agent_compute[243716]: 2025-10-14 09:42:48.432 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320 Oct 14 05:42:48 localhost journal[235816]: End of file while reading data: 
Input/output error Oct 14 05:42:48 localhost journal[235816]: End of file while reading data: Input/output error Oct 14 05:42:48 localhost systemd[1]: libpod-59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.scope: Deactivated successfully. Oct 14 05:42:48 localhost systemd[1]: libpod-59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.scope: Consumed 1.181s CPU time. Oct 14 05:42:48 localhost podman[243864]: 2025-10-14 09:42:48.579118672 +0000 UTC m=+0.316535569 container died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:42:48 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.timer: Deactivated successfully. Oct 14 05:42:48 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:42:48 localhost systemd[1]: tmp-crun.9kqOgg.mount: Deactivated successfully. Oct 14 05:42:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7-userdata-shm.mount: Deactivated successfully. Oct 14 05:42:48 localhost systemd[1]: var-lib-containers-storage-overlay-580a94d220f8f7b88b6d35dc7cc2a43d8f9dc291f1fe97c534ccb4743deb8c0a-merged.mount: Deactivated successfully. Oct 14 05:42:48 localhost podman[243864]: 2025-10-14 09:42:48.657621465 +0000 UTC m=+0.395038352 container cleanup 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:42:48 localhost podman[243864]: ceilometer_agent_compute Oct 14 05:42:48 localhost podman[243888]: 2025-10-14 09:42:48.744222103 +0000 UTC m=+0.053280712 container cleanup 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 05:42:48 localhost podman[243888]: ceilometer_agent_compute Oct 14 05:42:48 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully. Oct 14 05:42:48 localhost systemd[1]: Stopped ceilometer_agent_compute container. Oct 14 05:42:48 localhost systemd[1]: Starting ceilometer_agent_compute container... Oct 14 05:42:48 localhost systemd[1]: Started libcrun container. Oct 14 05:42:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580a94d220f8f7b88b6d35dc7cc2a43d8f9dc291f1fe97c534ccb4743deb8c0a/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Oct 14 05:42:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580a94d220f8f7b88b6d35dc7cc2a43d8f9dc291f1fe97c534ccb4743deb8c0a/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Oct 14 05:42:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:42:48 localhost podman[243901]: 2025-10-14 09:42:48.919431569 +0000 UTC m=+0.142579111 container init 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009) Oct 14 05:42:48 localhost ceilometer_agent_compute[243915]: + sudo -E kolla_set_configs Oct 14 05:42:48 localhost ceilometer_agent_compute[243915]: sudo: unable to send 
audit message: Operation not permitted Oct 14 05:42:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:42:48 localhost podman[243901]: 2025-10-14 09:42:48.954951922 +0000 UTC m=+0.178099464 container start 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, 
tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:42:48 localhost podman[243901]: ceilometer_agent_compute Oct 14 05:42:48 localhost systemd[1]: Started ceilometer_agent_compute container. Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Validating config file Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Copying service configuration files Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 
INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: INFO:__main__:Writing out command to execute Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: ++ cat /run_command Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: + ARGS= Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: + sudo kolla_copy_cacerts Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: sudo: unable to send audit message: Operation not permitted Oct 14 05:42:49 localhost podman[243924]: 2025-10-14 09:42:49.041332603 +0000 UTC m=+0.080037830 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=edpm) Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: + [[ ! -n '' ]] Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: + . 
kolla_extend_start Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: + umask 0022 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Oct 14 05:42:49 localhost podman[243924]: 2025-10-14 09:42:49.073163076 +0000 UTC m=+0.111868313 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:42:49 localhost podman[243924]: unhealthy Oct 14 05:42:49 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:42:49 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Failed with result 'exit-code'. Oct 14 05:42:49 localhost python3.9[244055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.758 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.758 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.758 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.758 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', 
'/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.758 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.758 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.758 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.759 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG 
cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 
DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.760 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s 
%(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.761 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.762 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.763 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG 
cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.764 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.765 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 
DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.766 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.767 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.767 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.767 2 DEBUG 
cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.767 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.767 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.767 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.767 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.767 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.767 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.768 2 DEBUG 
cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG 
cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.769 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 
Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.770 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.771 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.772 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.772 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.772 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.772 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.772 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.772 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.772 2 DEBUG cotyledon.oslo_config_glue [-] 
******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.790 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.791 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.792 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.809 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.934 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.934 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.934 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.934 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.934 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.934 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.935 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] 
hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG 
cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.936 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.937 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 
localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.938 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.939 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 
09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 
3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.940 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] 
monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.941 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 
2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.942 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 
'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.943 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 
2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.944 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] 
service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 
05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.948 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 
localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 
DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.949 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.950 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.951 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.952 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.952 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.955 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.963 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.966 
12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:42:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:42:51 localhost python3.9[244149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434969.167531-1544-143835811525874/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 14 05:42:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64602 DF PROTO=TCP SPT=56794 DPT=9101 SEQ=964255269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761AD6E90000000001030307) Oct 14 05:42:52 localhost python3.9[244259]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False Oct 14 05:42:53 localhost python3.9[244369]: 
ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:42:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31388 DF PROTO=TCP SPT=47980 DPT=9102 SEQ=3067647840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761AE07E0000000001030307) Oct 14 05:42:54 localhost python3[244479]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:42:54 localhost podman[244514]: Oct 14 05:42:54 localhost podman[244514]: 2025-10-14 09:42:54.364394762 +0000 UTC m=+0.076823752 container create c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors , config_id=edpm) Oct 14 05:42:54 localhost podman[244514]: 2025-10-14 09:42:54.322893599 +0000 UTC m=+0.035322649 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Oct 14 05:42:54 localhost python3[244479]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 
--user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl Oct 14 05:42:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31389 DF PROTO=TCP SPT=47980 DPT=9102 SEQ=3067647840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761AE4690000000001030307) Oct 14 05:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:42:55 localhost podman[244660]: 2025-10-14 09:42:55.212802577 +0000 UTC m=+0.076816803 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 14 05:42:55 localhost podman[244660]: 2025-10-14 09:42:55.225086009 +0000 UTC m=+0.089100275 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 05:42:55 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:42:55 localhost python3.9[244659]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:42:56 localhost python3.9[244788]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:42:56 localhost podman[244898]: 2025-10-14 09:42:56.83029823 +0000 UTC m=+0.081183045 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 05:42:56 localhost podman[244898]: 2025-10-14 09:42:56.845269312 +0000 UTC m=+0.096154127 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 05:42:56 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:42:56 localhost python3.9[244897]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760434976.2641137-1703-32412629118696/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:42:57 localhost python3.9[244971]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:42:57 localhost systemd[1]: Reloading. 
Oct 14 05:42:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:42:57.599 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:42:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:42:57.599 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:42:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:42:57.599 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:42:57 localhost systemd-sysv-generator[244999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:42:57 localhost systemd-rc-local-generator[244993]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:42:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:42:58 localhost python3.9[245062]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:42:58 localhost systemd[1]: Reloading. 
Oct 14 05:42:58 localhost systemd-sysv-generator[245091]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:42:58 localhost systemd-rc-local-generator[245086]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:42:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:42:58 localhost systemd[1]: Starting node_exporter container... Oct 14 05:42:58 localhost systemd[1]: Started libcrun container. Oct 14 05:42:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:42:58 localhost podman[245103]: 2025-10-14 09:42:58.980633596 +0000 UTC m=+0.149169870 container init c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', 
'--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:42:58 localhost node_exporter[245117]: ts=2025-10-14T09:42:58.998Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:58.998Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:58.998Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." 
Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:58.999Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:58.999Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:58.999Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:58.999Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:58.999Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:58.999Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=arp Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z 
caller=node_exporter.go:117 level=info collector=bcache Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=bonding Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=btrfs Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=conntrack Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=cpu Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=cpufreq Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=diskstats Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=edac Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=fibrechannel Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=filefd Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=filesystem Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=infiniband Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=ipvs Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=loadavg Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=mdadm Oct 14 05:42:59 localhost node_exporter[245117]: 
ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=meminfo Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=netclass Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=netdev Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=netstat Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=nfs Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=nfsd Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=nvme Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=schedstat Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=sockstat Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=softnet Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=systemd Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=tapestats Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=udp_queues Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=vmstat Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=xfs Oct 14 05:42:59 localhost 
node_exporter[245117]: ts=2025-10-14T09:42:59.000Z caller=node_exporter.go:117 level=info collector=zfs Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.001Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Oct 14 05:42:59 localhost node_exporter[245117]: ts=2025-10-14T09:42:59.001Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Oct 14 05:42:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:42:59 localhost podman[245103]: 2025-10-14 09:42:59.016436438 +0000 UTC m=+0.184972712 container start c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:42:59 localhost podman[245103]: node_exporter Oct 14 05:42:59 localhost systemd[1]: Started node_exporter container. Oct 14 05:42:59 localhost podman[245126]: 2025-10-14 09:42:59.101714496 +0000 UTC m=+0.079504604 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:42:59 localhost podman[245126]: 2025-10-14 09:42:59.112132211 +0000 UTC m=+0.089922339 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:42:59 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:43:00 localhost python3.9[245258]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:43:00 localhost systemd[1]: Stopping node_exporter container... 
Oct 14 05:43:00 localhost podman[245262]: 2025-10-14 09:43:00.130302498 +0000 UTC m=+0.067134641 container died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:43:00 localhost systemd[1]: tmp-crun.cuRmBN.mount: Deactivated successfully. Oct 14 05:43:00 localhost systemd[1]: libpod-c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.scope: Deactivated successfully. Oct 14 05:43:00 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.timer: Deactivated successfully. 
Oct 14 05:43:00 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:43:00 localhost podman[245262]: 2025-10-14 09:43:00.179182325 +0000 UTC m=+0.116014448 container cleanup c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:43:00 localhost podman[245262]: node_exporter Oct 14 05:43:00 localhost systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Oct 14 05:43:00 localhost podman[245287]: 2025-10-14 09:43:00.269101493 +0000 UTC m=+0.062905312 container cleanup 
c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:43:00 localhost podman[245287]: node_exporter Oct 14 05:43:00 localhost systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'. Oct 14 05:43:00 localhost systemd[1]: Stopped node_exporter container. Oct 14 05:43:00 localhost systemd[1]: Starting node_exporter container... Oct 14 05:43:00 localhost systemd[1]: Started libcrun container. 
Oct 14 05:43:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60666 DF PROTO=TCP SPT=59516 DPT=9100 SEQ=795481246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761AFA630000000001030307) Oct 14 05:43:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:43:00 localhost podman[245298]: 2025-10-14 09:43:00.436274606 +0000 UTC m=+0.134258439 container init c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:43:00 
localhost node_exporter[245313]: ts=2025-10-14T09:43:00.449Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.450Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.450Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.450Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.450Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.450Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.451Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.451Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag 
--collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.451Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=arp Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=bcache Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=bonding Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=btrfs Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=conntrack Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=cpu Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=cpufreq Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=diskstats Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=edac Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=fibrechannel Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=filefd Oct 14 05:43:00 localhost node_exporter[245313]: 
ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=filesystem Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=infiniband Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=ipvs Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=loadavg Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=mdadm Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=meminfo Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=netclass Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=netdev Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=netstat Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=nfs Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=nfsd Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=nvme Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=schedstat Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=sockstat Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=softnet Oct 14 05:43:00 localhost 
node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=systemd Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=tapestats Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=udp_queues Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=vmstat Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=xfs Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=node_exporter.go:117 level=info collector=zfs Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.452Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Oct 14 05:43:00 localhost node_exporter[245313]: ts=2025-10-14T09:43:00.453Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Oct 14 05:43:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:43:00 localhost podman[245298]: 2025-10-14 09:43:00.477159101 +0000 UTC m=+0.175142864 container start c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:43:00 localhost podman[245298]: node_exporter Oct 14 05:43:00 localhost systemd[1]: Started node_exporter container. 
Oct 14 05:43:00 localhost podman[245322]: 2025-10-14 09:43:00.572824664 +0000 UTC m=+0.091821577 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:43:00 localhost podman[245322]: 2025-10-14 09:43:00.583013521 +0000 UTC m=+0.102010434 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:43:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49004 DF PROTO=TCP SPT=38928 DPT=9882 SEQ=4183434105 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761AFB0A0000000001030307) Oct 14 05:43:00 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:43:01 localhost python3.9[245454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:43:02 localhost python3.9[245542]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760434980.879307-1799-127306573975699/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 14 05:43:03 localhost python3.9[245652]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False Oct 14 05:43:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60668 DF PROTO=TCP SPT=59516 DPT=9100 SEQ=795481246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B06690000000001030307) Oct 14 05:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:43:04 localhost systemd[1]: tmp-crun.s9zVbz.mount: Deactivated successfully. 
Oct 14 05:43:04 localhost podman[245762]: 2025-10-14 09:43:04.031422816 +0000 UTC m=+0.089502147 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible) Oct 14 05:43:04 localhost podman[245762]: 2025-10-14 09:43:04.069053374 +0000 UTC m=+0.127132715 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller) Oct 14 05:43:04 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:43:04 localhost python3.9[245763]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:43:05 localhost python3[245897]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:43:07 localhost podman[245912]: 2025-10-14 09:43:05.741515666 +0000 UTC m=+0.044028792 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Oct 14 05:43:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60669 DF PROTO=TCP SPT=59516 DPT=9100 SEQ=795481246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B162A0000000001030307) Oct 14 05:43:07 localhost podman[245985]: Oct 14 05:43:07 localhost podman[245985]: 2025-10-14 09:43:07.607352935 +0000 UTC m=+0.093712034 container create fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, 
maintainer=Navid Yaghoobi , config_id=edpm, container_name=podman_exporter) Oct 14 05:43:07 localhost podman[245985]: 2025-10-14 09:43:07.562777957 +0000 UTC m=+0.049137096 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Oct 14 05:43:07 localhost python3[245897]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Oct 14 05:43:08 localhost python3.9[246133]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:43:09 localhost python3.9[246245]: ansible-file Invoked with 
path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:43:10 localhost python3.9[246354]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760434989.4075077-1958-125610855292388/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:43:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60002 DF PROTO=TCP SPT=36334 DPT=9105 SEQ=3776600022 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B21690000000001030307) Oct 14 05:43:10 localhost python3.9[246409]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:43:10 localhost systemd[1]: Reloading. Oct 14 05:43:10 localhost systemd-sysv-generator[246435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:43:10 localhost systemd-rc-local-generator[246430]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 05:43:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:43:11 localhost podman[246446]: 2025-10-14 09:43:11.1828266 +0000 UTC m=+0.085518537 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:43:11 localhost podman[246446]: 2025-10-14 09:43:11.19509387 +0000 UTC m=+0.097785817 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 05:43:11 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:43:11 localhost python3.9[246518]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:43:11 localhost systemd[1]: Reloading. Oct 14 05:43:11 localhost systemd-rc-local-generator[246541]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:43:11 localhost systemd-sysv-generator[246544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:43:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:43:12 localhost systemd[1]: Starting podman_exporter container... Oct 14 05:43:12 localhost systemd[1]: Started libcrun container. Oct 14 05:43:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 05:43:12 localhost podman[246559]: 2025-10-14 09:43:12.333191052 +0000 UTC m=+0.128819615 container init fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:43:12 localhost podman_exporter[246573]: ts=2025-10-14T09:43:12.358Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)" Oct 14 05:43:12 localhost podman_exporter[246573]: ts=2025-10-14T09:43:12.358Z caller=exporter.go:69 level=info msg=metrics enhanced=false Oct 14 05:43:12 localhost podman_exporter[246573]: ts=2025-10-14T09:43:12.358Z caller=handler.go:94 level=info msg="enabled collectors" Oct 14 05:43:12 localhost podman_exporter[246573]: ts=2025-10-14T09:43:12.358Z caller=handler.go:105 level=info collector=container Oct 14 05:43:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:43:12 localhost systemd[1]: Starting Podman API Service... Oct 14 05:43:12 localhost systemd[1]: Started Podman API Service. 
Oct 14 05:43:12 localhost podman[246559]: 2025-10-14 09:43:12.379106649 +0000 UTC m=+0.174735222 container start fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:43:12 localhost podman[246559]: podman_exporter Oct 14 05:43:12 localhost systemd[1]: Started podman_exporter container. Oct 14 05:43:12 localhost podman[246584]: time="2025-10-14T09:43:12Z" level=info msg="/usr/bin/podman filtering at log level info" Oct 14 05:43:12 localhost podman[246584]: time="2025-10-14T09:43:12Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Oct 14 05:43:12 localhost podman[246584]: time="2025-10-14T09:43:12Z" level=info msg="Setting parallel job count to 25" Oct 14 05:43:12 localhost podman[246584]: time="2025-10-14T09:43:12Z" level=info msg="Using systemd socket activation to determine API endpoint" Oct 14 05:43:12 localhost podman[246584]: time="2025-10-14T09:43:12Z" level=info msg="API service listening on \"/run/podman/podman.sock\". 
URI: \"/run/podman/podman.sock\"" Oct 14 05:43:12 localhost podman[246584]: @ - - [14/Oct/2025:09:43:12 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1" Oct 14 05:43:12 localhost podman[246584]: time="2025-10-14T09:43:12Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:43:12 localhost podman[246583]: 2025-10-14 09:43:12.525475443 +0000 UTC m=+0.151524371 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:43:12 localhost podman[246583]: 2025-10-14 09:43:12.533348352 +0000 UTC m=+0.159397300 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 
'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:43:12 localhost podman[246583]: unhealthy Oct 14 05:43:13 localhost systemd[1]: tmp-crun.zJNrr3.mount: Deactivated successfully. Oct 14 05:43:14 localhost python3.9[246728]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:43:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34294 DF PROTO=TCP SPT=42990 DPT=9101 SEQ=1886810571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B30440000000001030307) Oct 14 05:43:14 localhost systemd[1]: Stopping podman_exporter container... Oct 14 05:43:14 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:43:14 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:43:15 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. 
Oct 14 05:43:15 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:43:15 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'. Oct 14 05:43:15 localhost podman[246584]: @ - - [14/Oct/2025:09:43:12 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 2790 "" "Go-http-client/1.1" Oct 14 05:43:15 localhost systemd[1]: libpod-fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.scope: Deactivated successfully. Oct 14 05:43:15 localhost podman[246732]: 2025-10-14 09:43:15.318769796 +0000 UTC m=+1.057298050 container died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:43:15 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.timer: Deactivated successfully. Oct 14 05:43:15 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 05:43:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc-userdata-shm.mount: Deactivated successfully. Oct 14 05:43:16 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:17 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:43:17 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:43:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34296 DF PROTO=TCP SPT=42990 DPT=9101 SEQ=1886810571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B3C690000000001030307) Oct 14 05:43:17 localhost systemd[1]: var-lib-containers-storage-overlay-b2afe959211bce9082207305305cedf79a695a1ec65409ce1177f777d128d5cc-merged.mount: Deactivated successfully. 
Oct 14 05:43:17 localhost podman[246732]: 2025-10-14 09:43:17.400681385 +0000 UTC m=+3.139209619 container cleanup fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:43:17 localhost podman[246732]: podman_exporter Oct 14 05:43:17 localhost podman[246744]: 2025-10-14 09:43:17.41175441 +0000 UTC m=+2.089703936 container cleanup fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:43:18 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:43:18 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:18 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:18 localhost systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Oct 14 05:43:19 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:43:19 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:43:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:43:19 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 14 05:43:19 localhost podman[246808]: 2025-10-14 09:43:19.2068398 +0000 UTC m=+0.688303436 container cleanup fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:43:19 localhost podman[246808]: podman_exporter Oct 14 05:43:19 localhost podman[246819]: 2025-10-14 09:43:19.28059296 +0000 UTC m=+0.148958645 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 
'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 05:43:19 localhost podman[246819]: 2025-10-14 09:43:19.310486653 +0000 UTC m=+0.178852278 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:43:19 localhost podman[246819]: unhealthy Oct 14 05:43:19 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:43:19 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Failed with result 'exit-code'. Oct 14 05:43:19 localhost systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'. Oct 14 05:43:19 localhost systemd[1]: Stopped podman_exporter container. Oct 14 05:43:19 localhost systemd[1]: Starting podman_exporter container... Oct 14 05:43:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34297 DF PROTO=TCP SPT=42990 DPT=9101 SEQ=1886810571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B4C290000000001030307) Oct 14 05:43:21 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. 
Oct 14 05:43:21 localhost systemd[1]: var-lib-containers-storage-overlay-e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd-merged.mount: Deactivated successfully. Oct 14 05:43:22 localhost systemd[1]: Started libcrun container. Oct 14 05:43:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:43:22 localhost podman[246856]: 2025-10-14 09:43:22.169942645 +0000 UTC m=+2.539338727 container init fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:43:22 localhost podman_exporter[246870]: ts=2025-10-14T09:43:22.188Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)" Oct 14 05:43:22 localhost podman_exporter[246870]: ts=2025-10-14T09:43:22.188Z caller=exporter.go:69 level=info msg=metrics enhanced=false Oct 14 05:43:22 localhost podman[246584]: @ - - [14/Oct/2025:09:43:22 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1" Oct 14 05:43:22 localhost podman[246584]: time="2025-10-14T09:43:22Z" level=info 
msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:43:22 localhost podman_exporter[246870]: ts=2025-10-14T09:43:22.188Z caller=handler.go:94 level=info msg="enabled collectors" Oct 14 05:43:22 localhost podman_exporter[246870]: ts=2025-10-14T09:43:22.188Z caller=handler.go:105 level=info collector=container Oct 14 05:43:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:43:22 localhost podman[246856]: 2025-10-14 09:43:22.255548883 +0000 UTC m=+2.624944895 container start fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:43:22 localhost podman[246856]: podman_exporter Oct 14 05:43:22 localhost podman[246880]: 2025-10-14 09:43:22.282547809 +0000 UTC m=+0.075199024 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:43:22 localhost podman[246880]: 2025-10-14 09:43:22.29382532 +0000 UTC m=+0.086476545 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:43:22 localhost podman[246880]: unhealthy Oct 14 05:43:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30506 DF PROTO=TCP SPT=32908 
DPT=9102 SEQ=524359593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B55AE0000000001030307) Oct 14 05:43:24 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:43:24 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:43:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30507 DF PROTO=TCP SPT=32908 DPT=9102 SEQ=524359593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B59A90000000001030307) Oct 14 05:43:24 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:43:25 localhost systemd[1]: Started podman_exporter container. Oct 14 05:43:25 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:25 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:25 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:43:25 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'. Oct 14 05:43:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:43:25 localhost podman[247025]: 2025-10-14 09:43:25.561642136 +0000 UTC m=+0.102778218 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:43:25 localhost podman[247025]: 2025-10-14 09:43:25.601055457 +0000 UTC m=+0.142191549 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid) Oct 14 05:43:25 localhost python3.9[247042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:43:26 localhost python3.9[247136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul 
setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435005.2271206-2054-144399806072535/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 14 05:43:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:43:26 localhost podman[246584]: time="2025-10-14T09:43:26Z" level=error msg="Getting root fs size for \"0b30aab1d44260dad12c48fb5b50a655699fd9392ac58c201410b4ab939a3139\": getting diffsize of layer \"948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca\" and its parent \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\": unmounting layer 948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca: replacing mount point \"/var/lib/containers/storage/overlay/948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca/merged\": device or resource busy" Oct 14 05:43:26 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:43:27 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:43:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 14 05:43:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:27 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:43:27 localhost podman[247181]: 2025-10-14 09:43:27.216951201 +0000 UTC m=+0.296729291 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 05:43:27 localhost podman[247181]: 2025-10-14 09:43:27.225812408 +0000 UTC m=+0.305590538 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 05:43:27 
localhost python3.9[247257]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False Oct 14 05:43:28 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:43:28 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:28 localhost python3.9[247374]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:43:28 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:29 localhost python3[247484]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:43:29 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:43:29 localhost systemd[1]: var-lib-containers-storage-overlay-e28793152a1a08ef6d85a0f8369b6de4304acf0fcefe34329896abb9348d5919-merged.mount: Deactivated successfully. Oct 14 05:43:29 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:43:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39425 DF PROTO=TCP SPT=48462 DPT=9100 SEQ=3132877332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B6F940000000001030307) Oct 14 05:43:30 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:30 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:30 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:43:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21423 DF PROTO=TCP SPT=39392 DPT=9882 SEQ=2251928576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B703A0000000001030307) Oct 14 05:43:30 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:43:30 localhost podman[247513]: 2025-10-14 09:43:30.900186261 +0000 UTC m=+0.075360229 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:43:30 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:43:30 localhost podman[247513]: 2025-10-14 09:43:30.93290301 +0000 UTC m=+0.108076958 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:43:32 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:43:32 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully. 
Oct 14 05:43:32 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully. Oct 14 05:43:33 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:43:33 localhost systemd[1]: var-lib-containers-storage-overlay-e64b1e8bff0d16ef1fc588a2601fcfa122bfb13336b10b2850f483736795f5fd-merged.mount: Deactivated successfully. Oct 14 05:43:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39427 DF PROTO=TCP SPT=48462 DPT=9100 SEQ=3132877332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B7BA90000000001030307) Oct 14 05:43:33 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:43:34 localhost systemd[1]: tmp-crun.2CGzM4.mount: Deactivated successfully. 
Oct 14 05:43:34 localhost podman[247536]: 2025-10-14 09:43:34.342227293 +0000 UTC m=+0.128707121 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:43:34 localhost podman[247536]: 2025-10-14 09:43:34.38114325 +0000 UTC m=+0.167623148 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009) Oct 14 05:43:35 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:35 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:43:35 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:43:36 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:43:36 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:43:36 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:43:36 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:43:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39428 DF PROTO=TCP SPT=48462 DPT=9100 SEQ=3132877332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B8B6A0000000001030307) Oct 14 05:43:38 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:43:38 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:43:38 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:43:39 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:39 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:43:39 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:43:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 14 05:43:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:43:40 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:43:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64422 DF PROTO=TCP SPT=49172 DPT=9105 SEQ=590631173 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761B96A90000000001030307) Oct 14 05:43:40 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:43:40 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:40 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:43:41 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully. 
Oct 14 05:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 05:43:41 localhost systemd[1]: var-lib-containers-storage-overlay-c0c763704100a115f96b041a65b3a8f6522965320f15224e7afd8516b03357b7-merged.mount: Deactivated successfully.
Oct 14 05:43:41 localhost podman[247572]: 2025-10-14 09:43:41.344158563 +0000 UTC m=+0.065291724 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:43:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:43:41 localhost systemd[1]: var-lib-containers-storage-overlay-c0c763704100a115f96b041a65b3a8f6522965320f15224e7afd8516b03357b7-merged.mount: Deactivated successfully.
Oct 14 05:43:41 localhost podman[247572]: 2025-10-14 09:43:41.379220089 +0000 UTC m=+0.100353260 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:43:42 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 14 05:43:42 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:43:42 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:43:42 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 14 05:43:42 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 05:43:42 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 14 05:43:43 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.328 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.360 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.361 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.361 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.381 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.381 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.382 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.382 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.382 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.383 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.383 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.399 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.400 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.400 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.400 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.401 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:43 localhost nova_compute[236479]: 2025-10-14 09:43:43.862 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:44 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.059 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.060 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=13126MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.061 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.061 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.152 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.152 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.176 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1675 DF PROTO=TCP SPT=59588 DPT=9101 SEQ=3295771881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761BA5730000000001030307)
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.629 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.635 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.668 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.671 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:43:44 localhost nova_compute[236479]: 2025-10-14 09:43:44.671 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:45 localhost nova_compute[236479]: 2025-10-14 09:43:45.454 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:45 localhost nova_compute[236479]: 2025-10-14 09:43:45.454 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:45 localhost nova_compute[236479]: 2025-10-14 09:43:45.454 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:45 localhost systemd[1]: var-lib-containers-storage-overlay-e28793152a1a08ef6d85a0f8369b6de4304acf0fcefe34329896abb9348d5919-merged.mount: Deactivated successfully.
Oct 14 05:43:46 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:43:46 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:43:47 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully.
Oct 14 05:43:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1677 DF PROTO=TCP SPT=59588 DPT=9101 SEQ=3295771881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761BB16A0000000001030307)
Oct 14 05:43:47 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully.
Oct 14 05:43:48 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:43:48 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:43:48 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:43:48 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully.
Oct 14 05:43:48 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:43:49 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully.
Oct 14 05:43:49 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:43:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 05:43:49 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:43:49 localhost podman[247646]: 2025-10-14 09:43:49.841511542 +0000 UTC m=+0.084123898 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:43:49 localhost podman[246584]: time="2025-10-14T09:43:49Z" level=error msg="Getting root fs size for \"19f48e6e854e4c9d6ac9b3074f536644676c6d9a63f46fe5f18bcf66de085e9c\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 14 05:43:49 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:43:49 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:43:49 localhost podman[247646]: 2025-10-14 09:43:49.87422281 +0000 UTC m=+0.116835146 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 14 05:43:49 localhost podman[247646]: unhealthy
Oct 14 05:43:50 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:43:50 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:43:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1678 DF PROTO=TCP SPT=59588 DPT=9101 SEQ=3295771881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761BC1290000000001030307)
Oct 14 05:43:52 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:43:52 localhost systemd[1]: var-lib-containers-storage-overlay-5c1375b47f7238425ac168df0b31eebcac7daf8f7b82fa846760d02ff141bc67-merged.mount: Deactivated successfully.
Oct 14 05:43:52 localhost systemd[1]: var-lib-containers-storage-overlay-5c1375b47f7238425ac168df0b31eebcac7daf8f7b82fa846760d02ff141bc67-merged.mount: Deactivated successfully.
Oct 14 05:43:52 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:43:52 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Failed with result 'exit-code'.
Oct 14 05:43:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:43:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:43:53 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:43:53 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:43:53 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 14 05:43:53 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 14 05:43:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35713 DF PROTO=TCP SPT=38500 DPT=9102 SEQ=3643779073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761BCADD0000000001030307)
Oct 14 05:43:54 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully.
Oct 14 05:43:54 localhost systemd[1]: var-lib-containers-storage-overlay-c0c763704100a115f96b041a65b3a8f6522965320f15224e7afd8516b03357b7-merged.mount: Deactivated successfully.
Oct 14 05:43:54 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:43:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35714 DF PROTO=TCP SPT=38500 DPT=9102 SEQ=3643779073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761BCEEA0000000001030307)
Oct 14 05:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 05:43:55 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:43:55 localhost systemd[1]: var-lib-containers-storage-overlay-c0c763704100a115f96b041a65b3a8f6522965320f15224e7afd8516b03357b7-merged.mount: Deactivated successfully.
Oct 14 05:43:55 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 14 05:43:55 localhost systemd[1]: var-lib-containers-storage-overlay-fc06e989b61b0623172ed8f6228aeadb5ab4e2033fa5c722e42cb9029cc166b7-merged.mount: Deactivated successfully.
Oct 14 05:43:55 localhost podman[247674]: 2025-10-14 09:43:55.290483996 +0000 UTC m=+0.079602221 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 14 05:43:55 localhost podman[247674]: 2025-10-14 09:43:55.299518672 +0000 UTC m=+0.088636887 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 14 05:43:55 localhost podman[247674]: unhealthy
Oct 14 05:43:56 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 14 05:43:56 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 14 05:43:57 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully.
Oct 14 05:43:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 05:43:57 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully.
Oct 14 05:43:57 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully.
Oct 14 05:43:57 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:43:57 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'.
Oct 14 05:43:57 localhost podman[247500]: 2025-10-14 09:43:30.551236304 +0000 UTC m=+0.047840677 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Oct 14 05:43:57 localhost podman[247697]: 2025-10-14 09:43:57.531521988 +0000 UTC m=+0.288504889 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true)
Oct 14 05:43:57 localhost podman[247697]: 2025-10-14 09:43:57.544266469 +0000 UTC m=+0.301249310 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 05:43:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:43:57.600 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:43:57.600 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:43:57.600 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:58 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 14 05:43:58 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:43:58 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully.
Oct 14 05:43:59 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully.
Oct 14 05:43:59 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 05:43:59 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully.
Oct 14 05:43:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 05:43:59 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:43:59 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:44:00 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 14 05:44:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22099 DF PROTO=TCP SPT=57276 DPT=9100 SEQ=3880783864 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761BE4C30000000001030307)
Oct 14 05:44:00 localhost podman[247728]: 2025-10-14 09:44:00.476285031 +0000 UTC m=+0.636968558 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 05:44:00 localhost podman[247728]: 2025-10-14 09:44:00.512599592 +0000 UTC m=+0.673283099 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:44:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1436 DF PROTO=TCP SPT=55750 DPT=9882 SEQ=2158350137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761BE56B0000000001030307)
Oct 14 05:44:01 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 14 05:44:01 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully.
Oct 14 05:44:02 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:44:02 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:44:03 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:44:03 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 05:44:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22101 DF PROTO=TCP SPT=57276 DPT=9100 SEQ=3880783864 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761BF0EA0000000001030307)
Oct 14 05:44:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 05:44:03 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 14 05:44:04 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:44:04 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:44:05 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:44:05 localhost podman[247770]: 2025-10-14 09:44:05.292537508 +0000 UTC m=+1.580099945 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 05:44:05 localhost podman[247770]: 2025-10-14 09:44:05.297517379 +0000 UTC m=+1.585079846 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 05:44:05 localhost podman[247758]: 2025-10-14 09:44:03.146002819 +0000 UTC m=+2.638400800 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Oct 14 05:44:06 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:06 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:44:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:44:06 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:44:07 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully.
Oct 14 05:44:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:44:07 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 05:44:07 localhost podman[247793]: 2025-10-14 09:44:07.345163462 +0000 UTC m=+0.956270119 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 05:44:07 localhost podman[247793]: 2025-10-14 09:44:07.381986087 +0000 UTC m=+0.993092774 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:44:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22102 DF PROTO=TCP SPT=57276 DPT=9100 SEQ=3880783864 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C00A90000000001030307)
Oct 14 05:44:08 localhost podman[247758]:
Oct 14 05:44:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:44:08 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 05:44:08 localhost podman[247758]: 2025-10-14 09:44:08.084874296 +0000 UTC m=+7.577272217 container create 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, config_id=edpm, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible)
Oct 14 05:44:08 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 14 05:44:08 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:44:08 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:08 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:08 localhost python3[247484]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Oct 14 05:44:09 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:44:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7917 DF PROTO=TCP SPT=33974 DPT=9105 SEQ=63105435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C0BE90000000001030307)
Oct 14 05:44:11 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:44:11 localhost systemd[1]: var-lib-containers-storage-overlay-5c1375b47f7238425ac168df0b31eebcac7daf8f7b82fa846760d02ff141bc67-merged.mount: Deactivated successfully.
Oct 14 05:44:11 localhost systemd[1]: var-lib-containers-storage-overlay-5c1375b47f7238425ac168df0b31eebcac7daf8f7b82fa846760d02ff141bc67-merged.mount: Deactivated successfully.
Oct 14 05:44:12 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:12 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 14 05:44:12 localhost systemd[1]: var-lib-containers-storage-overlay-d4bfc1d5359a39ca467891151850ad29ab2405c99c0e73704689224632337029-merged.mount: Deactivated successfully.
Oct 14 05:44:12 localhost systemd[1]: var-lib-containers-storage-overlay-d4bfc1d5359a39ca467891151850ad29ab2405c99c0e73704689224632337029-merged.mount: Deactivated successfully.
Oct 14 05:44:12 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:44:12 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 05:44:13 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:44:13 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5477 DF PROTO=TCP SPT=55236 DPT=9101 SEQ=2004862210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C1AA30000000001030307)
Oct 14 05:44:14 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:44:14 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:44:14 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:44:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:44:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:44:14 localhost podman[247839]: 2025-10-14 09:44:14.870881605 +0000 UTC m=+1.916829781 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:44:14 localhost podman[247839]: 2025-10-14 09:44:14.900554118 +0000 UTC m=+1.946502294 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 05:44:15 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:44:16 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:44:16 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:44:16 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:44:16 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 05:44:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5479 DF PROTO=TCP SPT=55236 DPT=9101 SEQ=2004862210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C26A90000000001030307)
Oct 14 05:44:17 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 14 05:44:17 localhost systemd[1]: var-lib-containers-storage-overlay-fc06e989b61b0623172ed8f6228aeadb5ab4e2033fa5c722e42cb9029cc166b7-merged.mount: Deactivated successfully.
Oct 14 05:44:18 localhost python3.9[247966]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:44:18 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:44:18 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:44:18 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:18 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:19 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully.
Oct 14 05:44:19 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully.
Oct 14 05:44:19 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully.
Oct 14 05:44:20 localhost python3.9[248078]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:44:21 localhost python3.9[248187]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760435060.3310459-2213-175235171777185/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:44:21 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:44:21 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully.
Oct 14 05:44:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5480 DF PROTO=TCP SPT=55236 DPT=9101 SEQ=2004862210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C36690000000001030307)
Oct 14 05:44:21 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully.
Oct 14 05:44:21 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:44:21 localhost python3.9[248242]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 05:44:21 localhost systemd[1]: var-lib-containers-storage-overlay-fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2-merged.mount: Deactivated successfully.
Oct 14 05:44:21 localhost systemd[1]: Reloading.
Oct 14 05:44:21 localhost systemd-rc-local-generator[248269]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:44:21 localhost systemd-sysv-generator[248272]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:44:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:44:22 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:44:22 localhost python3.9[248333]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:44:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 05:44:22 localhost systemd[1]: Reloading.
Oct 14 05:44:22 localhost podman[248335]: 2025-10-14 09:44:22.952213777 +0000 UTC m=+0.125521194 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute)
Oct 14 05:44:22 localhost systemd-sysv-generator[248381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:44:22 localhost systemd-rc-local-generator[248376]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:44:22 localhost podman[248335]: 2025-10-14 09:44:22.991012298 +0000 UTC m=+0.164319745 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d)
Oct 14 05:44:22 localhost podman[248335]: unhealthy
Oct 14 05:44:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:44:23 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully.
Oct 14 05:44:23 localhost systemd[1]: Starting openstack_network_exporter container...
Oct 14 05:44:23 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 14 05:44:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5693 DF PROTO=TCP SPT=55784 DPT=9102 SEQ=2373881454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C400D0000000001030307)
Oct 14 05:44:24 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:44:24 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:44:24 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:44:24 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:44:24 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Failed with result 'exit-code'.
Oct 14 05:44:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5694 DF PROTO=TCP SPT=55784 DPT=9102 SEQ=2373881454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C44290000000001030307)
Oct 14 05:44:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:44:26 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:44:26 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully.
Oct 14 05:44:26 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:44:26 localhost systemd[1]: Started libcrun container.
Oct 14 05:44:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8b5388ddff6e4942dd3389b97a5937ebc9c9ff248f30ace5a77b2bba77b47f/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8b5388ddff6e4942dd3389b97a5937ebc9c9ff248f30ace5a77b2bba77b47f/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 05:44:26 localhost podman[248390]: 2025-10-14 09:44:26.584054791 +0000 UTC m=+3.381519051 container init 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *bridge.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *coverage.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *datapath.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *iface.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *memory.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *ovnnorthd.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *ovn.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *ovsdbserver.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *pmd_perf.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *pmd_rxq.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: INFO 09:44:26 main.go:48: registering *vswitch.Collector
Oct 14 05:44:26 localhost openstack_network_exporter[248454]: NOTICE 09:44:26 main.go:82: listening on http://:9105/metrics
Oct 14 05:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 05:44:26 localhost podman[248390]: 2025-10-14 09:44:26.619546427 +0000 UTC m=+3.417010667 container start 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 14 05:44:26 localhost podman[248390]: openstack_network_exporter
Oct 14 05:44:27 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 14 05:44:27 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 14 05:44:27 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 05:44:27 localhost systemd[1]: Started openstack_network_exporter container.
Oct 14 05:44:27 localhost podman[246584]: time="2025-10-14T09:44:27Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged: device or resource busy"
Oct 14 05:44:27 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:44:27 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:44:27 localhost podman[246584]: time="2025-10-14T09:44:27Z" level=error msg="Getting root fs size for \"26ce00702516d8caef8a1efc4edd111297d4dc7c79183231a2a15b9823bbc3f3\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": creating overlay mount to /var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/Z2VNKBZE3BJTC5EX26JLUJ6NNV,upperdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/diff,workdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/work,nodev,metacopy=on\": no such file or directory"
Oct 14 05:44:27 localhost podman[248465]: 2025-10-14 09:44:27.734283694 +0000 UTC m=+1.108400867 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Oct 14 05:44:27 localhost podman[248465]: 2025-10-14 09:44:27.773394024 +0000 UTC m=+1.147511167 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Oct 14 05:44:27 localhost podman[248507]: 2025-10-14 09:44:27.79264553 +0000 UTC m=+0.249137131 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 14 05:44:27 localhost podman[248507]: 2025-10-14 09:44:27.833196281 +0000 UTC m=+0.289687822 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 14 05:44:27 localhost podman[248507]: unhealthy
Oct 14 05:44:28 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:44:28 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:44:28 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:28 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:28 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 14 05:44:28 localhost python3.9[248649]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:44:28 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:44:28 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'. Oct 14 05:44:28 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:44:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:44:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:44:28 localhost systemd[1]: Stopping openstack_network_exporter container... Oct 14 05:44:28 localhost systemd[1]: libpod-306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.scope: Deactivated successfully. Oct 14 05:44:28 localhost podman[248656]: 2025-10-14 09:44:28.678590314 +0000 UTC m=+0.085162878 container died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 05:44:28 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.timer: Deactivated successfully. Oct 14 05:44:28 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:44:29 localhost systemd[1]: var-lib-containers-storage-overlay-d4bfc1d5359a39ca467891151850ad29ab2405c99c0e73704689224632337029-merged.mount: Deactivated successfully. Oct 14 05:44:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749-userdata-shm.mount: Deactivated successfully. Oct 14 05:44:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:44:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43813 DF PROTO=TCP SPT=39298 DPT=9100 SEQ=3721922932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C59F40000000001030307) Oct 14 05:44:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49811 DF PROTO=TCP SPT=56524 DPT=9882 SEQ=1947386652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C5A990000000001030307) Oct 14 05:44:30 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 14 05:44:30 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:44:31 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:44:31 localhost systemd[1]: var-lib-containers-storage-overlay-0a8b5388ddff6e4942dd3389b97a5937ebc9c9ff248f30ace5a77b2bba77b47f-merged.mount: Deactivated successfully. Oct 14 05:44:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:44:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:44:31 localhost podman[248683]: 2025-10-14 09:44:31.184955096 +0000 UTC m=+1.721045875 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:44:31 localhost podman[248656]: 2025-10-14 09:44:31.208659018 +0000 UTC m=+2.615231562 container cleanup 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=edpm, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc.) 
Oct 14 05:44:31 localhost podman[248656]: openstack_network_exporter Oct 14 05:44:31 localhost podman[248683]: 2025-10-14 09:44:31.220855655 +0000 UTC m=+1.756946474 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:44:31 localhost podman[248670]: 2025-10-14 09:44:31.223695475 +0000 UTC m=+2.540578243 container cleanup 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 05:44:32 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:44:32 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:44:33 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:44:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:44:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43815 DF PROTO=TCP SPT=39298 DPT=9100 SEQ=3721922932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C65E90000000001030307) Oct 14 05:44:33 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:44:33 localhost systemd[1]: var-lib-containers-storage-overlay-e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323-merged.mount: Deactivated successfully. 
Oct 14 05:44:33 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:44:33 localhost systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Oct 14 05:44:33 localhost podman[248704]: 2025-10-14 09:44:33.900953986 +0000 UTC m=+0.443639841 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Oct 14 05:44:33 localhost podman[248704]: 2025-10-14 09:44:33.916907539 +0000 UTC m=+0.459593334 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 05:44:34 localhost 
systemd[1]: tmp-crun.oTJ6HC.mount: Deactivated successfully. Oct 14 05:44:34 localhost systemd[1]: var-lib-containers-storage-overlay-9353b4c9b77a60c02d5cd3c8f9b94918c7a607156d2f7e1365b30ffe1fa49c89-merged.mount: Deactivated successfully. Oct 14 05:44:34 localhost systemd[1]: var-lib-containers-storage-overlay-41d6d78d48a59c2a92b7ebbd672b507950bf0a9c199b961ef8dec56e0bf4d10d-merged.mount: Deactivated successfully. Oct 14 05:44:34 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:44:34 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:44:34 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:44:34 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:44:34 localhost podman[248716]: 2025-10-14 09:44:34.974817344 +0000 UTC m=+1.095313227 container cleanup 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41) Oct 14 05:44:34 localhost podman[248716]: openstack_network_exporter Oct 14 05:44:35 localhost systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'. 
Oct 14 05:44:35 localhost systemd[1]: Stopped openstack_network_exporter container. Oct 14 05:44:35 localhost systemd[1]: Starting openstack_network_exporter container... Oct 14 05:44:35 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:44:36 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:44:36 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully. Oct 14 05:44:36 localhost systemd[1]: Started libcrun container. Oct 14 05:44:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8b5388ddff6e4942dd3389b97a5937ebc9c9ff248f30ace5a77b2bba77b47f/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Oct 14 05:44:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8b5388ddff6e4942dd3389b97a5937ebc9c9ff248f30ace5a77b2bba77b47f/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff) Oct 14 05:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 05:44:36 localhost podman[248734]: 2025-10-14 09:44:36.179630517 +0000 UTC m=+0.537169157 container init 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350) Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *bridge.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *coverage.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *datapath.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *iface.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *memory.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *ovnnorthd.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *ovn.Collector Oct 14 05:44:36 localhost 
openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *ovsdbserver.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *pmd_perf.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *pmd_rxq.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: INFO 09:44:36 main.go:48: registering *vswitch.Collector Oct 14 05:44:36 localhost openstack_network_exporter[248748]: NOTICE 09:44:36 main.go:82: listening on http://:9105/metrics Oct 14 05:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:44:36 localhost podman[248734]: 2025-10-14 09:44:36.21322676 +0000 UTC m=+0.570765400 container start 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
url=https://catalog.redhat.com/en/search?searchType=containers) Oct 14 05:44:36 localhost podman[248734]: openstack_network_exporter Oct 14 05:44:36 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully. Oct 14 05:44:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:44:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43816 DF PROTO=TCP SPT=39298 DPT=9100 SEQ=3721922932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C75A90000000001030307) Oct 14 05:44:38 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:44:38 localhost systemd[1]: var-lib-containers-storage-overlay-fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2-merged.mount: Deactivated successfully. Oct 14 05:44:38 localhost systemd[1]: var-lib-containers-storage-overlay-fd3a8c871077882fdb1447d21aa393eaa0b8c213ba80c4c5d1751225817fb0a2-merged.mount: Deactivated successfully. Oct 14 05:44:38 localhost systemd[1]: Started openstack_network_exporter container. 
Oct 14 05:44:38 localhost podman[248758]: 2025-10-14 09:44:38.586089583 +0000 UTC m=+2.369810307 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, release=1755695350, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, 
container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=) Oct 14 05:44:38 localhost podman[248770]: 2025-10-14 09:44:38.621349773 +0000 UTC m=+1.165071765 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:44:38 localhost podman[248758]: 2025-10-14 09:44:38.680151402 +0000 UTC m=+2.463872146 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, vendor=Red Hat, Inc., 
summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 14 05:44:38 localhost podman[248770]: 2025-10-14 09:44:38.709345031 +0000 UTC m=+1.253067093 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:44:39 localhost podman[248782]: 2025-10-14 09:44:39.227667541 +0000 UTC m=+0.871638838 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 05:44:39 localhost podman[248782]: 2025-10-14 09:44:39.294823757 +0000 UTC m=+0.938795084 container 
exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, config_id=ovn_controller) Oct 14 05:44:39 localhost systemd[1]: var-lib-containers-storage-overlay-41d6d78d48a59c2a92b7ebbd672b507950bf0a9c199b961ef8dec56e0bf4d10d-merged.mount: Deactivated successfully. 
Oct 14 05:44:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57903 DF PROTO=TCP SPT=57558 DPT=9105 SEQ=2320853500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C80E90000000001030307) Oct 14 05:44:40 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:44:40 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:44:41 localhost python3.9[248933]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 14 05:44:41 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:44:41 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:44:41 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:44:41 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:44:42 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. 
Oct 14 05:44:42 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 14 05:44:42 localhost python3.9[249044]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman Oct 14 05:44:42 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:44:42 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:44:42 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:44:42 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 14 05:44:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:44:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.190 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.191 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.191 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.191 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.192 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd 
(subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.647 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:44:43 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 14 05:44:43 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.810 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.811 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=13182MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.812 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.812 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:44:43 localhost podman[246584]: time="2025-10-14T09:44:43Z" level=error msg="Getting root fs size for \"3bd5ec280fe2f8000ac11b65895db7354ee6b92332d6a54829a6092b0e90bff8\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy" Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.887 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.888 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:44:43 localhost nova_compute[236479]: 2025-10-14 09:44:43.912 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:44:44 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:44:44 localhost python3.9[249190]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:44:44 localhost systemd[1]: Started libpod-conmon-328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.scope. 
Oct 14 05:44:44 localhost podman[249210]: 2025-10-14 09:44:44.211290525 +0000 UTC m=+0.103149292 container exec 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:44:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4234 DF PROTO=TCP SPT=43630 DPT=9101 SEQ=579005353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C8FD30000000001030307) Oct 14 05:44:44 localhost podman[249210]: 2025-10-14 09:44:44.248014533 +0000 UTC m=+0.139873340 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 14 05:44:44 localhost nova_compute[236479]: 2025-10-14 09:44:44.332 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:44:44 localhost nova_compute[236479]: 2025-10-14 09:44:44.340 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:44:44 localhost 
nova_compute[236479]: 2025-10-14 09:44:44.356 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:44:44 localhost nova_compute[236479]: 2025-10-14 09:44:44.359 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:44:44 localhost nova_compute[236479]: 2025-10-14 09:44:44.359 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:44:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:44:44 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 14 05:44:44 localhost systemd[1]: var-lib-containers-storage-overlay-1e508b56e5c4215a90f6b7ab87161275acbfc49ce32885eceeaa718ef9d09113-merged.mount: Deactivated successfully. 
Oct 14 05:44:44 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:44:44 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:44:44 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:44:44 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:44:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:44:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.355 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.356 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.356 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.356 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.378 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.379 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.379 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.380 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.380 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.380 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:44:45 localhost nova_compute[236479]: 2025-10-14 09:44:45.381 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:44:45 localhost python3.9[249350]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:44:46 localhost nova_compute[236479]: 2025-10-14 09:44:46.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:44:46 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:44:46 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully. Oct 14 05:44:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:44:46 localhost systemd[1]: libpod-conmon-328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.scope: Deactivated successfully. Oct 14 05:44:46 localhost systemd[1]: Started libpod-conmon-328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.scope. 
Oct 14 05:44:46 localhost podman[249351]: 2025-10-14 09:44:46.356160223 +0000 UTC m=+0.657696750 container exec 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:44:46 localhost podman[249351]: 2025-10-14 09:44:46.389202735 +0000 UTC m=+0.690739262 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:44:47 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:44:47 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:44:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4236 DF PROTO=TCP SPT=43630 DPT=9101 SEQ=579005353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761C9BE90000000001030307) Oct 14 05:44:47 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 14 05:44:48 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:44:48 localhost systemd[1]: var-lib-containers-storage-overlay-e33ddc8b42df498cb27b93c0db8d880bc0ea9bcace8f8f12bf0ae5fe30263323-merged.mount: Deactivated successfully. Oct 14 05:44:48 localhost podman[249380]: 2025-10-14 09:44:48.781911296 +0000 UTC m=+1.958077909 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:44:48 localhost podman[249380]: 2025-10-14 09:44:48.794240705 +0000 UTC m=+1.970407368 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent) Oct 14 05:44:49 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:44:49 localhost systemd[1]: var-lib-containers-storage-overlay-41d6d78d48a59c2a92b7ebbd672b507950bf0a9c199b961ef8dec56e0bf4d10d-merged.mount: Deactivated successfully. Oct 14 05:44:49 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:44:49 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:44:49 localhost python3.9[249506]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:44:49 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found 
this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:44:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:44:50 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully. Oct 14 05:44:50 localhost systemd[1]: var-lib-containers-storage-overlay-9353b4c9b77a60c02d5cd3c8f9b94918c7a607156d2f7e1365b30ffe1fa49c89-merged.mount: Deactivated successfully. Oct 14 05:44:50 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully. Oct 14 05:44:50 localhost python3.9[249616]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman Oct 14 05:44:50 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully. 
Oct 14 05:44:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4237 DF PROTO=TCP SPT=43630 DPT=9101 SEQ=579005353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761CABA90000000001030307) Oct 14 05:44:51 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully. Oct 14 05:44:51 localhost systemd[1]: var-lib-containers-storage-overlay-94c8ed49a708b3cf7decc1af1486bf21a75d0bfa1928c9a829c7de69159b6ccb-merged.mount: Deactivated successfully. Oct 14 05:44:51 localhost systemd[1]: var-lib-containers-storage-overlay-41d6d78d48a59c2a92b7ebbd672b507950bf0a9c199b961ef8dec56e0bf4d10d-merged.mount: Deactivated successfully. Oct 14 05:44:52 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:44:52 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 14 05:44:52 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 14 05:44:52 localhost systemd[1]: libpod-conmon-328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.scope: Deactivated successfully. Oct 14 05:44:53 localhost python3.9[249737]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:44:53 localhost systemd[1]: Started libpod-conmon-6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.scope. 
Oct 14 05:44:53 localhost podman[249738]: 2025-10-14 09:44:53.581060294 +0000 UTC m=+0.107168036 container exec 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:44:53 localhost podman[249738]: 2025-10-14 09:44:53.593196137 +0000 UTC m=+0.119303909 container 
exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent) Oct 14 05:44:53 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. 
Oct 14 05:44:53 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 14 05:44:53 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 14 05:44:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59804 DF PROTO=TCP SPT=35658 DPT=9102 SEQ=692854187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761CB53E0000000001030307) Oct 14 05:44:54 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:44:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:44:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59805 DF PROTO=TCP SPT=35658 DPT=9102 SEQ=692854187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761CB9290000000001030307) Oct 14 05:44:55 localhost python3.9[249888]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:44:55 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 14 05:44:55 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 14 05:44:55 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 14 05:44:55 localhost systemd[1]: libpod-conmon-6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.scope: Deactivated successfully. Oct 14 05:44:55 localhost podman[246584]: time="2025-10-14T09:44:55Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged: invalid argument" Oct 14 05:44:55 localhost podman[246584]: time="2025-10-14T09:44:55Z" level=error msg="Getting root fs size for \"4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": creating overlay mount to /var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/Z2VNKBZE3BJTC5EX26JLUJ6NNV,upperdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/diff,workdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/work,nodev,metacopy=on\": no such file or directory" Oct 14 05:44:55 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:44:55 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 14 05:44:55 localhost podman[249802]: 2025-10-14 09:44:55.541097442 +0000 UTC m=+0.875776285 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 05:44:55 localhost podman[249802]: 2025-10-14 09:44:55.574049873 +0000 UTC m=+0.908728666 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d)
Oct 14 05:44:55 localhost podman[249802]: unhealthy
Oct 14 05:44:55 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:44:55 localhost systemd[1]: Started libpod-conmon-6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.scope.
Oct 14 05:44:55 localhost podman[249889]: 2025-10-14 09:44:55.661795776 +0000 UTC m=+0.468954959 container exec 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Oct 14 05:44:55 localhost podman[249889]: 2025-10-14 09:44:55.667185845 +0000 UTC m=+0.474345068 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:44:56 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:44:56 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:44:56 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully.
Oct 14 05:44:56 localhost systemd[1]: var-lib-containers-storage-overlay-09d529a5e87063d6d8be572e15ccc1a6e2cd4e03cf8d02224d51bfc8e004317f-merged.mount: Deactivated successfully.
Oct 14 05:44:57 localhost systemd[1]: var-lib-containers-storage-overlay-09d529a5e87063d6d8be572e15ccc1a6e2cd4e03cf8d02224d51bfc8e004317f-merged.mount: Deactivated successfully.
Oct 14 05:44:57 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:44:57 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Failed with result 'exit-code'.
Oct 14 05:44:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:44:57.601 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:44:57.602 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:44:57.602 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:57 localhost python3.9[250037]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:44:58 localhost systemd[1]: var-lib-containers-storage-overlay-1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec-merged.mount: Deactivated successfully.
Oct 14 05:44:58 localhost systemd[1]: var-lib-containers-storage-overlay-9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997-merged.mount: Deactivated successfully.
Oct 14 05:44:58 localhost systemd[1]: var-lib-containers-storage-overlay-9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997-merged.mount: Deactivated successfully.
Oct 14 05:44:58 localhost systemd[1]: libpod-conmon-6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.scope: Deactivated successfully.
Oct 14 05:44:58 localhost python3.9[250148]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct 14 05:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 05:44:58 localhost podman[250161]: 2025-10-14 09:44:58.75109321 +0000 UTC m=+0.096994363 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 14 05:44:58 localhost podman[250161]: 2025-10-14 09:44:58.758989304 +0000 UTC m=+0.104890467 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 05:44:58 localhost podman[250161]: unhealthy
Oct 14 05:44:58 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 14 05:44:58 localhost systemd[1]: var-lib-containers-storage-overlay-1e508b56e5c4215a90f6b7ab87161275acbfc49ce32885eceeaa718ef9d09113-merged.mount: Deactivated successfully.
Oct 14 05:44:59 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:44:59 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'.
Oct 14 05:44:59 localhost systemd[1]: var-lib-containers-storage-overlay-1e508b56e5c4215a90f6b7ab87161275acbfc49ce32885eceeaa718ef9d09113-merged.mount: Deactivated successfully.
Oct 14 05:44:59 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully.
Oct 14 05:44:59 localhost systemd[1]: var-lib-containers-storage-overlay-1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec-merged.mount: Deactivated successfully.
Oct 14 05:44:59 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully.
Oct 14 05:44:59 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 14 05:45:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21680 DF PROTO=TCP SPT=41538 DPT=9100 SEQ=2374245630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761CCF230000000001030307)
Oct 14 05:45:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46350 DF PROTO=TCP SPT=32950 DPT=9882 SEQ=3203154239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761CCFCA0000000001030307)
Oct 14 05:45:00 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 14 05:45:00 localhost systemd[1]: var-lib-containers-storage-overlay-9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997-merged.mount: Deactivated successfully.
Oct 14 05:45:01 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 14 05:45:01 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:45:01 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:45:01 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:45:01 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:45:01 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:45:01 localhost podman[246584]: time="2025-10-14T09:45:01Z" level=error msg="Getting root fs size for \"4975a6d92aca83780dadeaa32e5d3411b5b1b9a5cbb139d81a19555754b24402\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": creating overlay mount to /var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/Z2VNKBZE3BJTC5EX26JLUJ6NNV,upperdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/diff,workdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/work,nodev,metacopy=on\": no such file or directory"
Oct 14 05:45:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:45:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:45:02 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 14 05:45:02 localhost systemd[1]: var-lib-containers-storage-overlay-4cc5b6d664010750643235f3f70d195ea350655d57182e7e57ebfd557533d6a2-merged.mount: Deactivated successfully.
Oct 14 05:45:02 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:45:02 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:45:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21682 DF PROTO=TCP SPT=41538 DPT=9100 SEQ=2374245630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761CDB290000000001030307)
Oct 14 05:45:03 localhost python3.9[250291]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 14 05:45:03 localhost systemd[1]: Started libpod-conmon-46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.scope.
Oct 14 05:45:03 localhost systemd[1]: tmp-crun.SmCKY1.mount: Deactivated successfully.
Oct 14 05:45:03 localhost podman[250292]: 2025-10-14 09:45:03.708241664 +0000 UTC m=+0.111323873 container exec 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:45:03 localhost podman[250292]: 2025-10-14 09:45:03.740105496 +0000 UTC m=+0.143187665 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:45:04 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 14 05:45:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 05:45:04 localhost systemd[1]: var-lib-containers-storage-overlay-94c8ed49a708b3cf7decc1af1486bf21a75d0bfa1928c9a829c7de69159b6ccb-merged.mount: Deactivated successfully.
Oct 14 05:45:04 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:45:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 05:45:05 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 14 05:45:05 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 14 05:45:05 localhost podman[250322]: 2025-10-14 09:45:05.355213876 +0000 UTC m=+1.308833539 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 05:45:05 localhost podman[250333]: 2025-10-14 09:45:05.39992016 +0000 UTC m=+0.398379160 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:45:05 localhost podman[250333]: 2025-10-14 09:45:05.414305631 +0000 UTC m=+0.412764681 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 05:45:05 localhost podman[250322]: 2025-10-14 09:45:05.470249894 +0000 UTC m=+1.423869547 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:45:05 localhost python3.9[250468]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 14 05:45:06 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully.
Oct 14 05:45:06 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully.
Oct 14 05:45:06 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully.
Oct 14 05:45:06 localhost systemd[1]: libpod-conmon-46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.scope: Deactivated successfully.
Oct 14 05:45:06 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 05:45:06 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 05:45:06 localhost systemd[1]: Started libpod-conmon-46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.scope.
Oct 14 05:45:06 localhost podman[250469]: 2025-10-14 09:45:06.901844509 +0000 UTC m=+0.934453740 container exec 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid) Oct 14 05:45:06 localhost podman[250469]: 2025-10-14 09:45:06.934181993 +0000 UTC m=+0.966791254 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, 
tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid) Oct 14 05:45:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21683 DF PROTO=TCP SPT=41538 DPT=9100 SEQ=2374245630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761CEAEA0000000001030307) Oct 14 05:45:08 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. 
Oct 14 05:45:08 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:45:08 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:45:08 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:45:09 localhost python3.9[250609]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:45:09 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:10 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:45:10 localhost systemd[1]: libpod-conmon-46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.scope: Deactivated successfully. 
Oct 14 05:45:10 localhost python3.9[250719]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman Oct 14 05:45:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13211 DF PROTO=TCP SPT=50332 DPT=9105 SEQ=2466641566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761CF6290000000001030307) Oct 14 05:45:10 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:45:10 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:10 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:10 localhost podman[246584]: time="2025-10-14T09:45:10Z" level=error msg="Getting root fs size for \"4605c6b657557665a95c3d9b315bcf2722b5188b326f111055c3090da2d8bed6\": creating overlay mount to /var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/empty,upperdir=/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/diff,workdir=/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/work,nodev,metacopy=on\": no such file or directory" Oct 14 05:45:10 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 14 05:45:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:45:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:45:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:45:12 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 14 05:45:12 localhost systemd[1]: var-lib-containers-storage-overlay-09d529a5e87063d6d8be572e15ccc1a6e2cd4e03cf8d02224d51bfc8e004317f-merged.mount: Deactivated successfully. Oct 14 05:45:13 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:45:13 localhost systemd[1]: var-lib-containers-storage-overlay-4021d20142192293b753d5aa3904830cf887c958e51a03d916a4726fdc448e46-merged.mount: Deactivated successfully. 
Oct 14 05:45:13 localhost podman[250735]: 2025-10-14 09:45:13.427474049 +0000 UTC m=+1.951244102 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:45:13 localhost podman[250735]: 2025-10-14 09:45:13.455001279 +0000 UTC m=+1.978771332 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:45:13 localhost podman[250734]: 2025-10-14 09:45:13.468067927 +0000 UTC m=+1.995859215 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 05:45:13 localhost podman[250733]: 2025-10-14 09:45:13.544863468 +0000 UTC m=+2.063791887 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 14 05:45:13 localhost podman[250733]: 2025-10-14 09:45:13.556505688 +0000 UTC m=+2.075434097 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, version=9.6, architecture=x86_64, distribution-scope=public) Oct 14 05:45:13 localhost podman[250734]: 2025-10-14 09:45:13.576097213 +0000 UTC m=+2.103888532 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 05:45:14 localhost python3.9[250905]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:45:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47559 DF PROTO=TCP SPT=50640 DPT=9101 SEQ=2511811172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D05040000000001030307) Oct 14 05:45:14 localhost systemd[1]: var-lib-containers-storage-overlay-1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec-merged.mount: Deactivated successfully. Oct 14 05:45:14 localhost systemd[1]: var-lib-containers-storage-overlay-9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997-merged.mount: Deactivated successfully. 
Oct 14 05:45:14 localhost systemd[1]: var-lib-containers-storage-overlay-9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997-merged.mount: Deactivated successfully. Oct 14 05:45:14 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:45:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:14 localhost systemd[1]: Started libpod-conmon-6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.scope. Oct 14 05:45:14 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:45:14 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:45:14 localhost podman[250906]: 2025-10-14 09:45:14.81704026 +0000 UTC m=+0.661398425 container exec 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009) Oct 14 05:45:14 localhost podman[250906]: 2025-10-14 09:45:14.846911221 +0000 UTC m=+0.691269326 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 
(image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:45:15 localhost systemd[1]: var-lib-containers-storage-overlay-3a5231add129a89d0adead7ab11bea3dfa286b532e456cc25a1ad81207e8880c-merged.mount: Deactivated successfully. Oct 14 05:45:15 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully. 
Oct 14 05:45:15 localhost podman[246584]: time="2025-10-14T09:45:15Z" level=error msg="Getting root fs size for \"46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be\": getting diffsize of layer \"1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec\" and its parent \"0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c\": unmounting layer 1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec: replacing mount point \"/var/lib/containers/storage/overlay/1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec/merged\": device or resource busy" Oct 14 05:45:15 localhost systemd[1]: var-lib-containers-storage-overlay-1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec-merged.mount: Deactivated successfully. Oct 14 05:45:15 localhost systemd[1]: var-lib-containers-storage-overlay-56898ab6d39b47764ac69f563001cff1a6e38a16fd0080c65298dff54892d790-merged.mount: Deactivated successfully. Oct 14 05:45:16 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:16 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully. Oct 14 05:45:16 localhost systemd[1]: var-lib-containers-storage-overlay-1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec-merged.mount: Deactivated successfully. Oct 14 05:45:16 localhost systemd[1]: var-lib-containers-storage-overlay-1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec-merged.mount: Deactivated successfully. Oct 14 05:45:17 localhost systemd[1]: libpod-conmon-6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.scope: Deactivated successfully. 
Oct 14 05:45:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47561 DF PROTO=TCP SPT=50640 DPT=9101 SEQ=2511811172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D112A0000000001030307) Oct 14 05:45:17 localhost systemd[1]: var-lib-containers-storage-overlay-9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997-merged.mount: Deactivated successfully. Oct 14 05:45:17 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully. Oct 14 05:45:17 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully. Oct 14 05:45:17 localhost python3.9[251044]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:45:17 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully. Oct 14 05:45:17 localhost systemd[1]: Started libpod-conmon-6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.scope. 
Oct 14 05:45:17 localhost podman[251045]: 2025-10-14 09:45:17.689951841 +0000 UTC m=+0.094107979 container exec 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 05:45:17 localhost podman[251045]: 2025-10-14 09:45:17.718135078 +0000 UTC m=+0.122291246 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 
(image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 05:45:18 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 14 05:45:18 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:45:18 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:18 localhost python3.9[251184]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:45:18 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:19 localhost systemd[1]: var-lib-containers-storage-overlay-3a5231add129a89d0adead7ab11bea3dfa286b532e456cc25a1ad81207e8880c-merged.mount: Deactivated successfully. Oct 14 05:45:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:19 localhost systemd[1]: libpod-conmon-6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.scope: Deactivated successfully. Oct 14 05:45:19 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:45:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:45:19 localhost podman[251218]: 2025-10-14 09:45:19.830915527 +0000 UTC m=+0.089450719 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 05:45:19 localhost podman[251218]: 2025-10-14 09:45:19.866904916 +0000 UTC 
m=+0.125440038 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:45:20 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:45:20 localhost python3.9[251312]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman Oct 14 05:45:20 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 14 05:45:20 localhost systemd[1]: var-lib-containers-storage-overlay-4cc5b6d664010750643235f3f70d195ea350655d57182e7e57ebfd557533d6a2-merged.mount: Deactivated successfully. Oct 14 05:45:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47562 DF PROTO=TCP SPT=50640 DPT=9101 SEQ=2511811172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D20E90000000001030307) Oct 14 05:45:21 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 14 05:45:21 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 14 05:45:21 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 14 05:45:21 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:45:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26842 DF PROTO=TCP SPT=55532 DPT=9102 SEQ=2237294048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D2A6E0000000001030307) Oct 14 05:45:23 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:45:23 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 14 05:45:24 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 14 05:45:24 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:45:24 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:45:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26843 DF PROTO=TCP SPT=55532 DPT=9102 SEQ=2237294048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D2E690000000001030307) Oct 14 05:45:26 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:45:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 14 05:45:26 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:45:27 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:45:27 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:45:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:45:27 localhost podman[251414]: 2025-10-14 09:45:27.328989296 +0000 UTC m=+0.111165408 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Oct 14 05:45:27 localhost podman[251414]: 2025-10-14 09:45:27.359578596 +0000 UTC m=+0.141754678 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, config_id=edpm) Oct 14 05:45:27 localhost podman[251414]: unhealthy Oct 14 05:45:27 localhost python3.9[251446]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:45:27 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:27 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:45:27 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:45:27 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Failed with result 'exit-code'. Oct 14 05:45:28 localhost systemd[1]: Started libpod-conmon-59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.scope. 
Oct 14 05:45:28 localhost podman[251477]: 2025-10-14 09:45:28.015799447 +0000 UTC m=+0.519187937 container exec 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 05:45:28 localhost podman[251477]: 2025-10-14 09:45:28.044495097 +0000 UTC m=+0.547883597 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=edpm) Oct 14 05:45:28 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:45:28 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:28 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:29 localhost python3.9[251658]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:45:29 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:45:29 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:29 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:29 localhost systemd[1]: libpod-conmon-59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.scope: Deactivated successfully. Oct 14 05:45:29 localhost systemd[1]: Started libpod-conmon-59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.scope. 
Oct 14 05:45:29 localhost podman[251659]: 2025-10-14 09:45:29.580767022 +0000 UTC m=+0.184221533 container exec 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 05:45:29 localhost podman[251667]: 2025-10-14 09:45:29.61209517 +0000 UTC m=+0.167113081 container health_status 
fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:45:29 localhost podman[251667]: 2025-10-14 09:45:29.648758637 +0000 UTC m=+0.203776468 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi 
) Oct 14 05:45:29 localhost podman[251667]: unhealthy Oct 14 05:45:29 localhost podman[251659]: 2025-10-14 09:45:29.660796047 +0000 UTC m=+0.264250538 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:45:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23294 DF PROTO=TCP SPT=51360 DPT=9100 SEQ=3906610443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D44540000000001030307) Oct 14 05:45:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49142 DF PROTO=TCP SPT=57214 DPT=9882 SEQ=63947678 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D44F90000000001030307) Oct 14 05:45:31 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 14 05:45:31 localhost systemd[1]: var-lib-containers-storage-overlay-8d62222e8be5ac5f7261ca7d31d843da4ab3033140a4b9bae53a55e69f471cf7-merged.mount: Deactivated successfully. Oct 14 05:45:31 localhost systemd[1]: var-lib-containers-storage-overlay-8d62222e8be5ac5f7261ca7d31d843da4ab3033140a4b9bae53a55e69f471cf7-merged.mount: Deactivated successfully. Oct 14 05:45:31 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:45:31 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'. Oct 14 05:45:31 localhost systemd[1]: libpod-conmon-59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.scope: Deactivated successfully. 
Oct 14 05:45:33 localhost python3.9[251839]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:45:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23296 DF PROTO=TCP SPT=51360 DPT=9100 SEQ=3906610443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D50690000000001030307) Oct 14 05:45:33 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:45:33 localhost systemd[1]: var-lib-containers-storage-overlay-4021d20142192293b753d5aa3904830cf887c958e51a03d916a4726fdc448e46-merged.mount: Deactivated successfully. Oct 14 05:45:33 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:45:34 localhost python3.9[251949]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman Oct 14 05:45:35 localhost systemd[1]: var-lib-containers-storage-overlay-56898ab6d39b47764ac69f563001cff1a6e38a16fd0080c65298dff54892d790-merged.mount: Deactivated successfully. Oct 14 05:45:35 localhost systemd[1]: var-lib-containers-storage-overlay-3a5231add129a89d0adead7ab11bea3dfa286b532e456cc25a1ad81207e8880c-merged.mount: Deactivated successfully. 
Oct 14 05:45:35 localhost systemd[1]: var-lib-containers-storage-overlay-3a5231add129a89d0adead7ab11bea3dfa286b532e456cc25a1ad81207e8880c-merged.mount: Deactivated successfully. Oct 14 05:45:36 localhost python3.9[252072]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:45:36 localhost systemd[1]: Started libpod-conmon-c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.scope. Oct 14 05:45:36 localhost systemd[1]: tmp-crun.n7a1T6.mount: Deactivated successfully. Oct 14 05:45:36 localhost podman[252073]: 2025-10-14 09:45:36.387965457 +0000 UTC m=+0.115476000 container exec c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:45:36 localhost podman[252073]: 2025-10-14 09:45:36.418111505 +0000 UTC m=+0.145622008 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:45:36 localhost systemd[1]: var-lib-containers-storage-overlay-1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec-merged.mount: Deactivated successfully. 
Oct 14 05:45:36 localhost systemd[1]: var-lib-containers-storage-overlay-56898ab6d39b47764ac69f563001cff1a6e38a16fd0080c65298dff54892d790-merged.mount: Deactivated successfully. Oct 14 05:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:45:37 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:45:37 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:45:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23297 DF PROTO=TCP SPT=51360 DPT=9100 SEQ=3906610443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D602A0000000001030307) Oct 14 05:45:37 localhost podman[252103]: 2025-10-14 09:45:37.634892898 +0000 UTC m=+0.703715537 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:45:37 localhost podman[252103]: 2025-10-14 09:45:37.643313665 +0000 UTC m=+0.712136334 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2) Oct 14 05:45:37 localhost podman[252104]: 2025-10-14 09:45:37.726808519 +0000 UTC m=+0.784823799 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:45:37 localhost podman[252104]: 2025-10-14 09:45:37.76712896 +0000 UTC m=+0.825144250 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 05:45:38 localhost python3.9[252251]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:45:38 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully. Oct 14 05:45:38 localhost systemd[1]: var-lib-containers-storage-overlay-1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec-merged.mount: Deactivated successfully. Oct 14 05:45:38 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:38 localhost systemd[1]: libpod-conmon-c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.scope: Deactivated successfully. Oct 14 05:45:38 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:45:38 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:45:38 localhost systemd[1]: Started libpod-conmon-c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.scope. 
Oct 14 05:45:38 localhost podman[252252]: 2025-10-14 09:45:38.775849875 +0000 UTC m=+0.547178809 container exec c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:45:38 localhost podman[252252]: 2025-10-14 09:45:38.812209583 +0000 UTC m=+0.583538537 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:45:38 localhost systemd[1]: var-lib-containers-storage-overlay-1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec-merged.mount: Deactivated successfully. Oct 14 05:45:38 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:45:39 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully. Oct 14 05:45:39 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully. 
Oct 14 05:45:39 localhost systemd[1]: libpod-conmon-c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.scope: Deactivated successfully. Oct 14 05:45:39 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:45:39 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:40 localhost python3.9[252392]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:45:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52320 DF PROTO=TCP SPT=56140 DPT=9105 SEQ=2914965041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D6B690000000001030307) Oct 14 05:45:40 localhost python3.9[252502]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman Oct 14 05:45:40 localhost systemd[1]: var-lib-containers-storage-overlay-3a5231add129a89d0adead7ab11bea3dfa286b532e456cc25a1ad81207e8880c-merged.mount: Deactivated successfully. Oct 14 05:45:42 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:45:42 localhost systemd[1]: var-lib-containers-storage-overlay-496ac8ae1b781159d9732cba668aefff9d4a69111a9ec162f48ec47befb2b47b-merged.mount: Deactivated successfully. 
Oct 14 05:45:43 localhost nova_compute[236479]: 2025-10-14 09:45:43.161 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:44 localhost nova_compute[236479]: 2025-10-14 09:45:44.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:44 localhost nova_compute[236479]: 2025-10-14 09:45:44.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61891 DF PROTO=TCP SPT=44024 DPT=9101 SEQ=721515747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D7A330000000001030307) Oct 14 05:45:44 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 14 05:45:44 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 14 05:45:44 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:45:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 05:45:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:45:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.178 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.178 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.179 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.196 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.196 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.196 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.197 2 DEBUG nova.compute.resource_tracker [None 
req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.197 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:45:45 localhost podman[252516]: 2025-10-14 09:45:45.311912069 +0000 UTC m=+0.376728926 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, distribution-scope=public) Oct 14 05:45:45 localhost podman[252518]: 2025-10-14 09:45:45.357909544 +0000 UTC m=+0.415837332 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:45:45 localhost podman[252516]: 2025-10-14 09:45:45.409251307 +0000 UTC m=+0.474068164 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9) Oct 14 05:45:45 localhost podman[252518]: 2025-10-14 09:45:45.44202702 +0000 UTC m=+0.499954818 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', 
'--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:45:45 localhost podman[252517]: 2025-10-14 09:45:45.412832349 +0000 UTC m=+0.471167058 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:45:45 localhost podman[252517]: 2025-10-14 09:45:45.496262018 +0000 UTC m=+0.554596727 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 05:45:45 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. 
Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.696 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.846 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.847 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=13021MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.847 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.848 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.922 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.922 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:45:45 localhost nova_compute[236479]: 2025-10-14 09:45:45.947 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:45:45 localhost python3.9[252710]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:45:46 localhost nova_compute[236479]: 2025-10-14 09:45:46.409 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:45:46 localhost nova_compute[236479]: 2025-10-14 09:45:46.415 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:45:46 localhost nova_compute[236479]: 2025-10-14 09:45:46.449 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for 
provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:45:46 localhost nova_compute[236479]: 2025-10-14 09:45:46.452 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:45:46 localhost nova_compute[236479]: 2025-10-14 09:45:46.452 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:45:47 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:45:47 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 14 05:45:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61893 DF PROTO=TCP SPT=44024 DPT=9101 SEQ=721515747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D86290000000001030307) Oct 14 05:45:47 localhost nova_compute[236479]: 2025-10-14 09:45:47.438 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:47 localhost nova_compute[236479]: 2025-10-14 09:45:47.438 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:47 localhost nova_compute[236479]: 2025-10-14 09:45:47.439 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:47 localhost nova_compute[236479]: 2025-10-14 09:45:47.439 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:45:47 localhost nova_compute[236479]: 2025-10-14 09:45:47.439 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:45:47 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:45:47 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 14 05:45:47 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:45:47 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:45:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:47 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:45:47 localhost systemd[1]: Started libpod-conmon-fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.scope. 
Oct 14 05:45:47 localhost podman[252712]: 2025-10-14 09:45:47.751674379 +0000 UTC m=+1.754977853 container exec fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:45:47 localhost podman[252712]: 2025-10-14 09:45:47.786434884 +0000 UTC m=+1.789738328 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:45:48 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:45:48 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:49 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:45:49 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:45:49 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:45:49 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:49 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:50 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:45:50 localhost python3.9[252872]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:45:50 localhost podman[246584]: time="2025-10-14T09:45:50Z" level=error msg="Getting root fs size for \"5fa3c1ddc2e7992f06d290f79c1e4f9d82948081ec8753bedbef84e87f1c41c4\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy" Oct 14 05:45:50 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:50 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:45:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:45:50 localhost systemd[1]: libpod-conmon-fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.scope: Deactivated successfully. Oct 14 05:45:50 localhost systemd[1]: Started libpod-conmon-fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.scope. 
Oct 14 05:45:50 localhost podman[252873]: 2025-10-14 09:45:50.996301888 +0000 UTC m=+0.466586179 container exec fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:45:51 localhost podman[252873]: 2025-10-14 09:45:51.028279382 +0000 UTC m=+0.498563673 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:45:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61894 DF PROTO=TCP SPT=44024 DPT=9101 SEQ=721515747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D95E90000000001030307) Oct 14 05:45:51 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:45:51 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:51 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:45:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:45:53 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:45:53 localhost systemd[1]: var-lib-containers-storage-overlay-eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385-merged.mount: Deactivated successfully. 
Oct 14 05:45:53 localhost podman[252902]: 2025-10-14 09:45:53.630782953 +0000 UTC m=+1.773320936 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0) Oct 14 05:45:53 localhost podman[252902]: 2025-10-14 09:45:53.666261147 +0000 UTC 
m=+1.808799190 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Oct 14 05:45:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=46034 DF PROTO=TCP SPT=58890 DPT=9102 SEQ=2231059188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761D9F9E0000000001030307) Oct 14 05:45:54 localhost systemd[1]: libpod-conmon-fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.scope: Deactivated successfully. Oct 14 05:45:54 localhost python3.9[253027]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:45:54 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:45:54 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully. Oct 14 05:45:54 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully. Oct 14 05:45:54 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully. 
Oct 14 05:45:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46035 DF PROTO=TCP SPT=58890 DPT=9102 SEQ=2231059188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761DA3A90000000001030307) Oct 14 05:45:54 localhost python3.9[253137]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman Oct 14 05:45:55 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 14 05:45:55 localhost systemd[1]: var-lib-containers-storage-overlay-8d62222e8be5ac5f7261ca7d31d843da4ab3033140a4b9bae53a55e69f471cf7-merged.mount: Deactivated successfully. Oct 14 05:45:56 localhost systemd[1]: var-lib-containers-storage-overlay-8d62222e8be5ac5f7261ca7d31d843da4ab3033140a4b9bae53a55e69f471cf7-merged.mount: Deactivated successfully. Oct 14 05:45:56 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:45:56 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully. 
Oct 14 05:45:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:45:57.602 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:45:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:45:57.603 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:45:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:45:57.603 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:45:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:45:58 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:45:58 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:45:58 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. 
Oct 14 05:45:58 localhost podman[253151]: 2025-10-14 09:45:58.902997696 +0000 UTC m=+0.444599042 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 05:45:58 localhost podman[253151]: 2025-10-14 09:45:58.915037727 +0000 UTC m=+0.456639073 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:45:59 localhost systemd[1]: var-lib-containers-storage-overlay-b229675e52e0150c8f53be2f60bdcd02e09cc9ac91e9d7513ccf836c4fc95815-merged.mount: Deactivated successfully. 
Oct 14 05:46:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13112 DF PROTO=TCP SPT=57696 DPT=9100 SEQ=2590838449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761DB9840000000001030307) Oct 14 05:46:00 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:46:00 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:46:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37619 DF PROTO=TCP SPT=53582 DPT=9882 SEQ=2658299360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761DBA290000000001030307) Oct 14 05:46:00 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:46:00 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 05:46:01 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 14 05:46:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:46:01 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. 
Oct 14 05:46:01 localhost podman[253202]: 2025-10-14 09:46:01.828911717 +0000 UTC m=+0.104928234 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:46:01 localhost podman[253202]: 2025-10-14 09:46:01.8379715 +0000 UTC m=+0.113988047 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:46:01 localhost podman[253202]: unhealthy Oct 14 05:46:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:02 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:46:02 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'. Oct 14 05:46:02 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:46:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:02 localhost python3.9[253301]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:46:02 localhost systemd[1]: Started libpod-conmon-306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.scope. 
Oct 14 05:46:02 localhost podman[253302]: 2025-10-14 09:46:02.81287477 +0000 UTC m=+0.088722676 container exec 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, container_name=openstack_network_exporter, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal) Oct 14 05:46:02 localhost podman[253302]: 2025-10-14 09:46:02.846173518 +0000 UTC m=+0.122021404 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6) Oct 14 05:46:03 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:03 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:03 localhost systemd[1]: var-lib-containers-storage-overlay-21837a037040259e69cb40b47a6715b197d579cd205243ce8d40aaf45d9a6d8f-merged.mount: Deactivated successfully. Oct 14 05:46:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13114 DF PROTO=TCP SPT=57696 DPT=9100 SEQ=2590838449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761DC5A90000000001030307) Oct 14 05:46:03 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:03 localhost auditd[726]: Audit daemon rotating log files Oct 14 05:46:03 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:04 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:04 localhost python3.9[253441]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 14 05:46:04 localhost systemd[1]: libpod-conmon-306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.scope: Deactivated successfully. 
Oct 14 05:46:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:04 localhost systemd[1]: Started libpod-conmon-306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.scope. Oct 14 05:46:04 localhost podman[253442]: 2025-10-14 09:46:04.332542162 +0000 UTC m=+0.114914501 container exec 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down 
image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, version=9.6) Oct 14 05:46:04 localhost podman[253442]: 2025-10-14 09:46:04.340076265 +0000 UTC m=+0.122448604 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container) Oct 14 05:46:04 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. 
Oct 14 05:46:04 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:04 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:05 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:06 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:46:06 localhost systemd[1]: var-lib-containers-storage-overlay-496ac8ae1b781159d9732cba668aefff9d4a69111a9ec162f48ec47befb2b47b-merged.mount: Deactivated successfully. Oct 14 05:46:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:07 localhost python3.9[253580]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:46:07 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:46:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13115 DF PROTO=TCP SPT=57696 DPT=9100 SEQ=2590838449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761DD5690000000001030307) Oct 14 05:46:09 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:46:09 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:46:09 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:46:09 localhost systemd[1]: libpod-conmon-306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.scope: Deactivated successfully. 
Oct 14 05:46:09 localhost podman[253598]: 2025-10-14 09:46:09.373002775 +0000 UTC m=+0.335167734 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:46:09 localhost podman[253598]: 2025-10-14 09:46:09.381077794 +0000 UTC m=+0.343242773 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:46:09 localhost podman[253599]: 2025-10-14 09:46:09.423439914 +0000 UTC m=+0.385033078 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:46:09 localhost podman[253599]: 2025-10-14 09:46:09.434946681 +0000 UTC m=+0.396539855 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:46:10 localhost systemd[1]: var-lib-containers-storage-overlay-8c5b531ba48535632b40540aa07cee707004fde63b53fdfb79d721331dbc1eb8-merged.mount: Deactivated successfully. Oct 14 05:46:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54111 DF PROTO=TCP SPT=33822 DPT=9105 SEQ=1166240014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761DE0A90000000001030307) Oct 14 05:46:11 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 14 05:46:11 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:46:11 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:46:11 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:46:11 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:46:12 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:12 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:46:12 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:46:13 localhost systemd[1]: var-lib-containers-storage-overlay-3d32571c90c517218e75b400153bfe2946f348989aeee2613f1e17f32183ce41-merged.mount: Deactivated successfully. Oct 14 05:46:13 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:13 localhost systemd[1]: var-lib-containers-storage-overlay-9dce2160573984ba54f17e563b839daf8c243479b9d2f49c1195fe30690bd2c9-merged.mount: Deactivated successfully. Oct 14 05:46:13 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 14 05:46:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17891 DF PROTO=TCP SPT=49686 DPT=9101 SEQ=2725360464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761DEF630000000001030307) Oct 14 05:46:15 localhost systemd[1]: var-lib-containers-storage-overlay-a10bb81cada1063fdd09337579a73ba5c07dabd1b81c2bfe70924b91722bf534-merged.mount: Deactivated successfully. Oct 14 05:46:15 localhost systemd[1]: var-lib-containers-storage-overlay-3d32571c90c517218e75b400153bfe2946f348989aeee2613f1e17f32183ce41-merged.mount: Deactivated successfully. Oct 14 05:46:15 localhost systemd[1]: var-lib-containers-storage-overlay-3d32571c90c517218e75b400153bfe2946f348989aeee2613f1e17f32183ce41-merged.mount: Deactivated successfully. Oct 14 05:46:16 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:46:16 localhost systemd[1]: var-lib-containers-storage-overlay-eef6f67dbcc4716427993a35dbc0e8cbdc2c029ffea4f262857224976d1c5385-merged.mount: Deactivated successfully. Oct 14 05:46:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17893 DF PROTO=TCP SPT=49686 DPT=9101 SEQ=2725360464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761DFB690000000001030307) Oct 14 05:46:17 localhost systemd[1]: var-lib-containers-storage-overlay-0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c-merged.mount: Deactivated successfully. Oct 14 05:46:17 localhost systemd[1]: var-lib-containers-storage-overlay-b229675e52e0150c8f53be2f60bdcd02e09cc9ac91e9d7513ccf836c4fc95815-merged.mount: Deactivated successfully. 
Oct 14 05:46:17 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:46:17 localhost systemd[1]: var-lib-containers-storage-overlay-a10bb81cada1063fdd09337579a73ba5c07dabd1b81c2bfe70924b91722bf534-merged.mount: Deactivated successfully.
Oct 14 05:46:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:46:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:46:17 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 14 05:46:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 05:46:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:46:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 05:46:18 localhost podman[253634]: 2025-10-14 09:46:18.295570758 +0000 UTC m=+0.085289688 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc.)
Oct 14 05:46:18 localhost podman[253635]: 2025-10-14 09:46:18.304957639 +0000 UTC m=+0.091197859 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:46:18 localhost podman[253634]: 2025-10-14 09:46:18.332447388 +0000 UTC m=+0.122166248 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6)
Oct 14 05:46:18 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:46:18 localhost podman[253636]: 2025-10-14 09:46:18.352897574 +0000 UTC m=+0.135658195 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 14 05:46:18 localhost podman[253636]: 2025-10-14 09:46:18.36049055 +0000 UTC m=+0.143251191 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 14 05:46:18 localhost podman[253635]: 2025-10-14 09:46:18.414891261 +0000 UTC m=+0.201131481 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller)
Oct 14 05:46:18 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 14 05:46:18 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully.
Oct 14 05:46:18 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully.
Oct 14 05:46:18 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 05:46:18 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 05:46:18 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 05:46:19 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 14 05:46:19 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:46:19 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:46:19 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:46:20 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 14 05:46:20 localhost systemd[1]: var-lib-containers-storage-overlay-21837a037040259e69cb40b47a6715b197d579cd205243ce8d40aaf45d9a6d8f-merged.mount: Deactivated successfully.
Oct 14 05:46:20 localhost systemd[1]: var-lib-containers-storage-overlay-21837a037040259e69cb40b47a6715b197d579cd205243ce8d40aaf45d9a6d8f-merged.mount: Deactivated successfully.
Oct 14 05:46:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17894 DF PROTO=TCP SPT=49686 DPT=9101 SEQ=2725360464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E0B290000000001030307)
Oct 14 05:46:21 localhost systemd[1]: var-lib-containers-storage-overlay-9dce2160573984ba54f17e563b839daf8c243479b9d2f49c1195fe30690bd2c9-merged.mount: Deactivated successfully.
Oct 14 05:46:21 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:46:21 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 14 05:46:21 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 14 05:46:22 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:46:22 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:46:23 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 14 05:46:23 localhost systemd[1]: var-lib-containers-storage-overlay-8c5b531ba48535632b40540aa07cee707004fde63b53fdfb79d721331dbc1eb8-merged.mount: Deactivated successfully.
Oct 14 05:46:23 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:46:23 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:46:23 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:46:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4943 DF PROTO=TCP SPT=39388 DPT=9102 SEQ=321807873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E14CD0000000001030307)
Oct 14 05:46:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4944 DF PROTO=TCP SPT=39388 DPT=9102 SEQ=321807873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E18EA0000000001030307)
Oct 14 05:46:25 localhost systemd[1]: var-lib-containers-storage-overlay-3d32571c90c517218e75b400153bfe2946f348989aeee2613f1e17f32183ce41-merged.mount: Deactivated successfully.
Oct 14 05:46:25 localhost systemd[1]: var-lib-containers-storage-overlay-9dce2160573984ba54f17e563b839daf8c243479b9d2f49c1195fe30690bd2c9-merged.mount: Deactivated successfully.
Oct 14 05:46:25 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:46:25 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:46:25 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:46:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 05:46:26 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:46:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:46:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:46:27 localhost systemd[1]: var-lib-containers-storage-overlay-a10bb81cada1063fdd09337579a73ba5c07dabd1b81c2bfe70924b91722bf534-merged.mount: Deactivated successfully.
Oct 14 05:46:27 localhost systemd[1]: var-lib-containers-storage-overlay-3d32571c90c517218e75b400153bfe2946f348989aeee2613f1e17f32183ce41-merged.mount: Deactivated successfully.
Oct 14 05:46:27 localhost systemd[1]: var-lib-containers-storage-overlay-3d32571c90c517218e75b400153bfe2946f348989aeee2613f1e17f32183ce41-merged.mount: Deactivated successfully.
Oct 14 05:46:27 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:46:28 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:46:28 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:46:28 localhost systemd[1]: var-lib-containers-storage-overlay-a10bb81cada1063fdd09337579a73ba5c07dabd1b81c2bfe70924b91722bf534-merged.mount: Deactivated successfully.
Oct 14 05:46:29 localhost systemd[1]: var-lib-containers-storage-overlay-a10bb81cada1063fdd09337579a73ba5c07dabd1b81c2bfe70924b91722bf534-merged.mount: Deactivated successfully.
Oct 14 05:46:29 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully.
Oct 14 05:46:29 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:46:30 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully.
Oct 14 05:46:30 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:46:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48765 DF PROTO=TCP SPT=37698 DPT=9100 SEQ=839069506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E2EB30000000001030307)
Oct 14 05:46:30 localhost systemd[1]: var-lib-containers-storage-overlay-30fc8906bf3c4dddee8af1b0fb71de2370697abfd1d45bc721251a95c39f5658-merged.mount: Deactivated successfully.
Oct 14 05:46:30 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 14 05:46:30 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully.
Oct 14 05:46:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54725 DF PROTO=TCP SPT=44388 DPT=9882 SEQ=2799558428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E2F5A0000000001030307)
Oct 14 05:46:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 05:46:31 localhost podman[253709]: 2025-10-14 09:46:31.298437614 +0000 UTC m=+0.090258486 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:46:31 localhost podman[253709]: 2025-10-14 09:46:31.339265366 +0000 UTC m=+0.131086278 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d)
Oct 14 05:46:31 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:46:31 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 14 05:46:31 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 14 05:46:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 05:46:32 localhost systemd[1]: var-lib-containers-storage-overlay-9dce2160573984ba54f17e563b839daf8c243479b9d2f49c1195fe30690bd2c9-merged.mount: Deactivated successfully.
Oct 14 05:46:32 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:46:32 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:46:32 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 05:46:32 localhost podman[253697]: 2025-10-14 09:46:32.615137167 +0000 UTC m=+6.405766019 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 14 05:46:32 localhost podman[253697]: 2025-10-14 09:46:32.624118199 +0000 UTC m=+6.414747021 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 14 05:46:32 localhost podman[253727]: 2025-10-14 09:46:32.665541076 +0000 UTC m=+0.201929162 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 14 05:46:32 localhost podman[253727]: 2025-10-14 09:46:32.705083995 +0000 UTC m=+0.241472051 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 05:46:32 localhost podman[253727]: unhealthy
Oct 14 05:46:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48767 DF PROTO=TCP SPT=37698 DPT=9100 SEQ=839069506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E3AAA0000000001030307)
Oct 14 05:46:33 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:46:33 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:46:34 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:46:34 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:46:35 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:46:35 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 05:46:35 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:46:35 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'.
Oct 14 05:46:35 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:46:36 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:46:36 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:46:37 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:46:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48768 DF PROTO=TCP SPT=37698 DPT=9100 SEQ=839069506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E4A690000000001030307) Oct 14 05:46:38 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully. Oct 14 05:46:38 localhost systemd[1]: var-lib-containers-storage-overlay-281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48-merged.mount: Deactivated successfully. Oct 14 05:46:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:38 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:46:39 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:39 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 14 05:46:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully. Oct 14 05:46:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:39 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:39 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:40 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 14 05:46:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15980 DF PROTO=TCP SPT=38636 DPT=9105 SEQ=241457133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E55A90000000001030307) Oct 14 05:46:41 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully. Oct 14 05:46:41 localhost systemd[1]: var-lib-containers-storage-overlay-74500a46616905488a2d34409fc38428e7baca36003522cc9b6c6fef05025663-merged.mount: Deactivated successfully. Oct 14 05:46:41 localhost systemd[1]: var-lib-containers-storage-overlay-74500a46616905488a2d34409fc38428e7baca36003522cc9b6c6fef05025663-merged.mount: Deactivated successfully. Oct 14 05:46:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:46:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:46:41 localhost podman[253894]: 2025-10-14 09:46:41.804359029 +0000 UTC m=+0.094379672 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd) Oct 14 05:46:41 localhost podman[253893]: 2025-10-14 09:46:41.841881935 +0000 UTC m=+0.133183221 container health_status 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:46:41 localhost podman[253894]: 2025-10-14 09:46:41.865157614 +0000 UTC m=+0.155178187 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251009, 
org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 05:46:41 localhost podman[253893]: 2025-10-14 09:46:41.880074859 +0000 UTC m=+0.171376145 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0) Oct 14 05:46:42 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:42 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 14 05:46:42 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. 
Oct 14 05:46:42 localhost nova_compute[236479]: 2025-10-14 09:46:42.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:42 localhost nova_compute[236479]: 2025-10-14 09:46:42.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 14 05:46:42 localhost nova_compute[236479]: 2025-10-14 09:46:42.192 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 14 05:46:42 localhost nova_compute[236479]: 2025-10-14 09:46:42.193 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:42 localhost nova_compute[236479]: 2025-10-14 09:46:42.193 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 14 05:46:42 localhost nova_compute[236479]: 2025-10-14 09:46:42.209 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:42 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated 
successfully. Oct 14 05:46:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:42 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:46:42 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 14 05:46:42 localhost systemd[1]: var-lib-containers-storage-overlay-30fc8906bf3c4dddee8af1b0fb71de2370697abfd1d45bc721251a95c39f5658-merged.mount: Deactivated successfully. Oct 14 05:46:43 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:43 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:43 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully. Oct 14 05:46:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:46:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 14 05:46:43 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:44 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56309 DF PROTO=TCP SPT=39182 DPT=9101 SEQ=3251075919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E64930000000001030307) Oct 14 05:46:44 localhost nova_compute[236479]: 2025-10-14 09:46:44.217 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:44 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully. Oct 14 05:46:44 localhost systemd[1]: var-lib-containers-storage-overlay-281ca1da1ab069caad01e829a4964fddd64aa8e87753481a91e721f1c1dc7f48-merged.mount: Deactivated successfully. Oct 14 05:46:44 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 14 05:46:44 localhost systemd[1]: var-lib-containers-storage-overlay-8663d2c3d5618f36fce8356c62a3252481fa61416414a2be1734fcb387a75a33-merged.mount: Deactivated successfully. 
Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.198 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.198 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.199 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.199 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for 
np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.199 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:46:45 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:45 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully. Oct 14 05:46:45 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully. Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.674 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.862 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.864 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=13012MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.864 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:46:45 localhost nova_compute[236479]: 2025-10-14 09:46:45.865 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.146 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.147 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:46:46 localhost systemd[1]: 
var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:46 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:46 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully. Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.281 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.424 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.425 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 
'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.442 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.462 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSSE3,COMPUTE_NODE,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,HW_CPU_X86
_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.476 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:46:46 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:46:46 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully. 
Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.935 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.941 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.962 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.965 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:46:46 localhost nova_compute[236479]: 2025-10-14 09:46:46.965 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:46:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56311 DF PROTO=TCP SPT=39182 DPT=9101 SEQ=3251075919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E70AA0000000001030307) Oct 14 05:46:47 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully. Oct 14 05:46:47 localhost systemd[1]: var-lib-containers-storage-overlay-74500a46616905488a2d34409fc38428e7baca36003522cc9b6c6fef05025663-merged.mount: Deactivated successfully. Oct 14 05:46:47 localhost systemd[1]: var-lib-containers-storage-overlay-74500a46616905488a2d34409fc38428e7baca36003522cc9b6c6fef05025663-merged.mount: Deactivated successfully. Oct 14 05:46:47 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully. Oct 14 05:46:47 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully. 
Oct 14 05:46:47 localhost nova_compute[236479]: 2025-10-14 09:46:47.964 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:47 localhost nova_compute[236479]: 2025-10-14 09:46:47.965 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:46:47 localhost nova_compute[236479]: 2025-10-14 09:46:47.965 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:46:48 localhost nova_compute[236479]: 2025-10-14 09:46:48.013 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:46:48 localhost nova_compute[236479]: 2025-10-14 09:46:48.014 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:48 localhost nova_compute[236479]: 2025-10-14 09:46:48.014 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:48 localhost nova_compute[236479]: 2025-10-14 09:46:48.015 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:48 localhost nova_compute[236479]: 2025-10-14 09:46:48.015 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:48 localhost nova_compute[236479]: 2025-10-14 09:46:48.016 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:46:48 localhost nova_compute[236479]: 2025-10-14 09:46:48.016 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:46:48 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully. Oct 14 05:46:48 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 14 05:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:46:48 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully. Oct 14 05:46:48 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully. Oct 14 05:46:49 localhost podman[253974]: 2025-10-14 09:46:49.012752961 +0000 UTC m=+0.070231620 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible) 
Oct 14 05:46:49 localhost podman[253975]: 2025-10-14 09:46:49.045578647 +0000 UTC m=+0.094731752 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:46:49 localhost podman[253974]: 2025-10-14 09:46:49.055083041 +0000 UTC m=+0.112561740 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vcs-type=git, distribution-scope=public, 
io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350) Oct 14 05:46:49 localhost podman[253975]: 2025-10-14 09:46:49.078110824 +0000 UTC m=+0.127263939 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:46:49 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:46:49 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:46:49 localhost podman[253981]: 2025-10-14 09:46:49.201561003 +0000 UTC m=+0.246526280 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:46:49 localhost podman[253981]: 2025-10-14 09:46:49.235080967 +0000 UTC m=+0.280046234 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:46:49 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:46:49 localhost systemd[1]: tmp-crun.lWIi64.mount: Deactivated successfully. Oct 14 05:46:49 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully. Oct 14 05:46:49 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:46:49 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 
2025-10-14 09:46:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.967 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:46:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:46:50 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully. Oct 14 05:46:50 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully. Oct 14 05:46:50 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 14 05:46:50 localhost systemd[1]: var-lib-containers-storage-overlay-8663d2c3d5618f36fce8356c62a3252481fa61416414a2be1734fcb387a75a33-merged.mount: Deactivated successfully. 
Oct 14 05:46:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56312 DF PROTO=TCP SPT=39182 DPT=9101 SEQ=3251075919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E80690000000001030307) Oct 14 05:46:52 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully. Oct 14 05:46:52 localhost systemd[1]: var-lib-containers-storage-overlay-02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01-merged.mount: Deactivated successfully. Oct 14 05:46:52 localhost systemd[1]: var-lib-containers-storage-overlay-02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01-merged.mount: Deactivated successfully. Oct 14 05:46:53 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 14 05:46:53 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 14 05:46:53 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 14 05:46:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2866 DF PROTO=TCP SPT=34614 DPT=9102 SEQ=2338247477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E89FE0000000001030307) Oct 14 05:46:54 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully. 
Oct 14 05:46:54 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully. Oct 14 05:46:54 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully. Oct 14 05:46:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2867 DF PROTO=TCP SPT=34614 DPT=9102 SEQ=2338247477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761E8DE90000000001030307) Oct 14 05:46:55 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 14 05:46:56 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully. Oct 14 05:46:56 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully. Oct 14 05:46:56 localhost systemd[1]: var-lib-containers-storage-overlay-512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36-merged.mount: Deactivated successfully. Oct 14 05:46:57 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully. Oct 14 05:46:57 localhost systemd[1]: var-lib-containers-storage-overlay-0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861-merged.mount: Deactivated successfully. Oct 14 05:46:57 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 14 05:46:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:46:57.603 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:46:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:46:57.603 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:46:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:46:57.603 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:46:58 localhost systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully. Oct 14 05:46:58 localhost systemd[1]: var-lib-containers-storage-overlay-ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a-merged.mount: Deactivated successfully. Oct 14 05:46:58 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:46:59 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:46:59 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 14 05:46:59 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:47:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27627 DF PROTO=TCP SPT=42490 DPT=9100 SEQ=4192678916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761EA3E40000000001030307) Oct 14 05:47:00 localhost systemd[1]: var-lib-containers-storage-overlay-5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe-merged.mount: Deactivated successfully. Oct 14 05:47:00 localhost systemd[1]: var-lib-containers-storage-overlay-02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01-merged.mount: Deactivated successfully. Oct 14 05:47:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5346 DF PROTO=TCP SPT=60412 DPT=9882 SEQ=2138511332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761EA48A0000000001030307) Oct 14 05:47:00 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:00 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:01 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:01 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. 
Oct 14 05:47:01 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 14 05:47:01 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 14 05:47:02 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:47:02 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:02 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 14 05:47:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:47:02 localhost podman[254040]: 2025-10-14 09:47:02.784497501 +0000 UTC m=+0.103697712 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 05:47:02 localhost podman[254040]: 2025-10-14 09:47:02.794278362 +0000 UTC m=+0.113478573 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Oct 14 05:47:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27629 DF PROTO=TCP SPT=42490 DPT=9100 SEQ=4192678916 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080A761EAFE90000000001030307) Oct 14 05:47:04 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:47:04 localhost systemd[1]: var-lib-containers-storage-overlay-2907680d146b3ac52bd167b30a8c95c31d3d501236d96d25e118eb29f3ddf43b-merged.mount: Deactivated successfully. Oct 14 05:47:04 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 05:47:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:47:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:47:05 localhost systemd[1]: tmp-crun.XIJXkW.mount: Deactivated successfully. Oct 14 05:47:05 localhost podman[254058]: 2025-10-14 09:47:05.571426982 +0000 UTC m=+0.106925115 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 05:47:05 localhost podman[254058]: 2025-10-14 09:47:05.579373707 +0000 UTC m=+0.114871790 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 05:47:05 localhost podman[254059]: 2025-10-14 09:47:05.640018148 +0000 UTC m=+0.172738919 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:47:05 localhost podman[254059]: 2025-10-14 09:47:05.721134617 +0000 UTC m=+0.253855368 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:47:05 localhost podman[254059]: unhealthy Oct 14 05:47:06 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 14 05:47:06 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 14 05:47:06 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:47:07 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:47:07 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:47:07 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:47:07 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:47:07 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'. Oct 14 05:47:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27630 DF PROTO=TCP SPT=42490 DPT=9100 SEQ=4192678916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761EBFA90000000001030307) Oct 14 05:47:08 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:08 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 14 05:47:08 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 14 05:47:08 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:47:08 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:08 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:09 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
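Editor's note: in the records above, `podman_exporter` reports `health_status=unhealthy` and its transient healthcheck unit exits `1/FAILURE`, while the other containers report `healthy`. A sketch for tallying healthcheck outcomes per container from a saved journal excerpt; the two embedded lines are abridged from the records above, and the same regex applies to the full exported log:

```python
import collections
import re

# Two abridged podman health_status records from the excerpt above.
log = """\
Oct 14 05:47:02 localhost podman[254040]: container health_status 59108b9e (name=ceilometer_agent_compute, health_status=healthy)
Oct 14 05:47:05 localhost podman[254059]: container health_status fcf956b7 (name=podman_exporter, health_status=unhealthy)
"""

# Count (container, status) pairs so a recurring "unhealthy" stands out.
counts = collections.Counter(re.findall(r"name=(\w+), health_status=(\w+)", log))
for (name, status), n in sorted(counts.items()):
    print(n, name, status)
```

Run against the whole log, a container that is consistently `unhealthy` (as `podman_exporter` is here) separates cleanly from one-off failures.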
Oct 14 05:47:09 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:47:10 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 14 05:47:10 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:47:10 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:47:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29720 DF PROTO=TCP SPT=52154 DPT=9105 SEQ=1520264888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761ECAE90000000001030307) Oct 14 05:47:10 localhost systemd[1]: var-lib-containers-storage-overlay-dbb4b39932e5609ba5ee4a2613d186c3370bc3d5edae8823aacfc98bedd90a72-merged.mount: Deactivated successfully. Oct 14 05:47:10 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:10 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:11 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:11 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
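Editor's note: the repeated "overlayfs: lowerdir is in-use as upperdir/workdir of another mount" warnings above fire when a directory serving as `lowerdir` of one overlay mount is the `upperdir` or `workdir` of another; comparing those options across the overlay entries in `/proc/mounts` locates the shared directory. A minimal parsing sketch using an illustrative mounts line (the paths are made up; the same split applies to each real `/proc/mounts` entry):

```python
# A sample /proc/mounts line for an overlay mount (paths are illustrative).
sample = ("overlay /var/lib/containers/storage/overlay/abc123/merged overlay "
          "rw,lowerdir=/layers/l1:/layers/l2,upperdir=/layers/up,"
          "workdir=/layers/work 0 0")

# /proc/mounts fields: source, mount point, fstype, comma-separated options.
src, target, fstype, opts = sample.split()[:4]
layers = [o for o in opts.split(",")
          if o.startswith(("lowerdir=", "upperdir=", "workdir="))]
print(target, layers)
```

Applying this to every `fstype == "overlay"` line of the live `/proc/mounts` and cross-checking each mount's `lowerdir` paths against every other mount's `upperdir`/`workdir` pinpoints the conflict the kernel is warning about.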
Oct 14 05:47:11 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:11 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:47:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:47:12 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:47:12 localhost systemd[1]: tmp-crun.jBjSDG.mount: Deactivated successfully. Oct 14 05:47:12 localhost podman[254097]: 2025-10-14 09:47:12.569875286 +0000 UTC m=+0.105042927 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 05:47:12 localhost podman[254097]: 2025-10-14 09:47:12.579005421 +0000 UTC m=+0.114173082 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 14 05:47:12 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:47:12 localhost podman[254096]: 2025-10-14 09:47:12.655282515 +0000 UTC m=+0.189299306 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 05:47:12 localhost podman[254096]: 2025-10-14 09:47:12.708202229 +0000 UTC m=+0.242218990 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 05:47:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37407 DF PROTO=TCP SPT=35602 DPT=9101 SEQ=4219441378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761ED9C20000000001030307) Oct 14 05:47:14 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:47:14 localhost systemd[1]: var-lib-containers-storage-overlay-2907680d146b3ac52bd167b30a8c95c31d3d501236d96d25e118eb29f3ddf43b-merged.mount: Deactivated successfully. Oct 14 05:47:14 localhost systemd[1]: var-lib-containers-storage-overlay-2907680d146b3ac52bd167b30a8c95c31d3d501236d96d25e118eb29f3ddf43b-merged.mount: Deactivated successfully. Oct 14 05:47:14 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:47:14 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:47:15 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 14 05:47:15 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. 
Oct 14 05:47:15 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 14 05:47:16 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 14 05:47:16 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:16 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:47:16 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:47:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:47:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37409 DF PROTO=TCP SPT=35602 DPT=9101 SEQ=4219441378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761EE5E90000000001030307) Oct 14 05:47:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:47:17 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:47:17 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:47:18 localhost systemd[1]: var-lib-containers-storage-overlay-dbb4b39932e5609ba5ee4a2613d186c3370bc3d5edae8823aacfc98bedd90a72-merged.mount: Deactivated successfully.
Oct 14 05:47:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:47:18 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 05:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:47:19 localhost podman[254134]: 2025-10-14 09:47:19.54036464 +0000 UTC m=+0.073864323 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 14 05:47:19 localhost podman[254133]: 2025-10-14 09:47:19.512868042 +0000 UTC m=+0.053844958 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public)
Oct 14 05:47:19 localhost podman[254133]: 2025-10-14 09:47:19.594432634 +0000 UTC m=+0.135409570 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Oct 14 05:47:19 localhost podman[254134]: 2025-10-14 09:47:19.606175766 +0000 UTC m=+0.139675489 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:47:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 05:47:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37410 DF PROTO=TCP SPT=35602 DPT=9101 SEQ=4219441378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761EF5A90000000001030307)
Oct 14 05:47:21 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:47:21 localhost systemd[1]: var-lib-containers-storage-overlay-5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde-merged.mount: Deactivated successfully.
Oct 14 05:47:21 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 05:47:21 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 05:47:21 localhost podman[254174]: 2025-10-14 09:47:21.670496435 +0000 UTC m=+1.215944729 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 05:47:21 localhost podman[254174]: 2025-10-14 09:47:21.720219985 +0000 UTC m=+1.265668309 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 05:47:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12700 DF PROTO=TCP SPT=33604 DPT=9102 SEQ=761762056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761EFF2E0000000001030307)
Oct 14 05:47:24 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:47:24 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:47:24 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:47:24 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:47:24 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:47:24 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 05:47:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12701 DF PROTO=TCP SPT=33604 DPT=9102 SEQ=761762056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F03290000000001030307)
Oct 14 05:47:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:47:26 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:47:26 localhost podman[246584]: time="2025-10-14T09:47:26Z" level=error msg="Getting root fs size for \"b3e743117a320dca1d8b49f7d97ef7a2c5ae0d3ee14d9828f444bf98d7785433\": getting diffsize of layer \"948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca\" and its parent \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\": unmounting layer 948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca: replacing mount point \"/var/lib/containers/storage/overlay/948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca/merged\": device or resource busy"
Oct 14 05:47:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:47:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:47:27 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:47:27 localhost systemd[1]: session-57.scope: Deactivated successfully.
Oct 14 05:47:27 localhost systemd[1]: session-57.scope: Consumed 1min 18.222s CPU time.
Oct 14 05:47:27 localhost systemd-logind[760]: Session 57 logged out. Waiting for processes to exit.
Oct 14 05:47:27 localhost systemd-logind[760]: Removed session 57.
Oct 14 05:47:27 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:47:27 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:47:28 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:47:28 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:47:28 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:47:28 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:47:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:47:29 localhost systemd[1]: var-lib-containers-storage-overlay-5428cfc209a0b726e8715c5a10b80ebeaeeb6cfb27b6ebd4c94e6f6214613fde-merged.mount: Deactivated successfully.
Oct 14 05:47:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24325 DF PROTO=TCP SPT=56842 DPT=9100 SEQ=493483629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F19140000000001030307)
Oct 14 05:47:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15280 DF PROTO=TCP SPT=52074 DPT=9882 SEQ=3849666615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F19BA0000000001030307)
Oct 14 05:47:31 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:47:31 localhost systemd[1]: var-lib-containers-storage-overlay-1b9d6a6189040853d18c1b25b60fe5e20e54845d3f8eb5e145d9272c6a19c97d-merged.mount: Deactivated successfully.
Oct 14 05:47:31 localhost systemd[1]: var-lib-containers-storage-overlay-1b9d6a6189040853d18c1b25b60fe5e20e54845d3f8eb5e145d9272c6a19c97d-merged.mount: Deactivated successfully.
Oct 14 05:47:32 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:47:32 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully.
Oct 14 05:47:32 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully.
Oct 14 05:47:33 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:47:33 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:47:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24327 DF PROTO=TCP SPT=56842 DPT=9100 SEQ=493483629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F25290000000001030307)
Oct 14 05:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully.
Oct 14 05:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-7ab4a314da1a4f576142ebf117938164a5edfd56bd6085edc385b152e23dd08e-merged.mount: Deactivated successfully.
Oct 14 05:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully.
Oct 14 05:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Oct 14 05:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 05:47:35 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Oct 14 05:47:35 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 14 05:47:35 localhost podman[254201]: 2025-10-14 09:47:35.306834747 +0000 UTC m=+0.098460767 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3)
Oct 14 05:47:35 localhost podman[254201]: 2025-10-14 09:47:35.312050211 +0000 UTC m=+0.103676311 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d)
Oct 14 05:47:35 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 14 05:47:35 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 14 05:47:36 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:47:36 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:47:36 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 14 05:47:36 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 05:47:37 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 14 05:47:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 05:47:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 05:47:37 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:47:37 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 14 05:47:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24328 DF PROTO=TCP SPT=56842 DPT=9100 SEQ=493483629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F34E90000000001030307)
Oct 14 05:47:37 localhost podman[254221]: 2025-10-14 09:47:37.560391611 +0000 UTC m=+0.101016333 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 05:47:37 localhost podman[254221]: 2025-10-14 09:47:37.598407609 +0000 UTC m=+0.139032291 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 14 05:47:37 localhost podman[254222]: 2025-10-14 09:47:37.614848273 +0000 UTC m=+0.151323369 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 14 05:47:37 localhost podman[254222]: 2025-10-14 09:47:37.646603451 +0000 UTC m=+0.183078577 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 05:47:37 localhost podman[254222]: unhealthy
Oct 14 05:47:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:47:37 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 05:47:37 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 05:47:37 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'.
Oct 14 05:47:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 14 05:47:38 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 14 05:47:38 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 14 05:47:38 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 14 05:47:38 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:40 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 14 05:47:40 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 14 05:47:40 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 14 05:47:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61166 DF PROTO=TCP SPT=60336 DPT=9105 SEQ=1254015079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F40290000000001030307) Oct 14 05:47:41 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:47:41 localhost systemd[1]: var-lib-containers-storage-overlay-1b9d6a6189040853d18c1b25b60fe5e20e54845d3f8eb5e145d9272c6a19c97d-merged.mount: Deactivated successfully. Oct 14 05:47:41 localhost systemd[1]: var-lib-containers-storage-overlay-1b9d6a6189040853d18c1b25b60fe5e20e54845d3f8eb5e145d9272c6a19c97d-merged.mount: Deactivated successfully. Oct 14 05:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. 
Oct 14 05:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 14 05:47:43 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:47:43 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:47:44 localhost nova_compute[236479]: 2025-10-14 09:47:44.211 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4087 DF PROTO=TCP SPT=55490 DPT=9101 SEQ=1546666915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F4EF30000000001030307) Oct 14 05:47:44 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 14 05:47:45 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 14 05:47:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:47:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:47:45 localhost systemd[1]: var-lib-containers-storage-overlay-7ab4a314da1a4f576142ebf117938164a5edfd56bd6085edc385b152e23dd08e-merged.mount: Deactivated successfully. Oct 14 05:47:45 localhost systemd[1]: var-lib-containers-storage-overlay-7ab4a314da1a4f576142ebf117938164a5edfd56bd6085edc385b152e23dd08e-merged.mount: Deactivated successfully. Oct 14 05:47:45 localhost podman[254349]: 2025-10-14 09:47:45.139174882 +0000 UTC m=+0.087296849 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:45 localhost podman[254349]: 2025-10-14 09:47:45.178131146 +0000 UTC m=+0.126253113 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, org.label-schema.license=GPLv2) Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.186 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.187 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.187 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.187 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.188 2 DEBUG oslo_concurrency.processutils [None 
req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:47:45 localhost podman[254348]: 2025-10-14 09:47:45.200071821 +0000 UTC m=+0.149724698 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, container_name=iscsid) Oct 14 05:47:45 
localhost podman[254348]: 2025-10-14 09:47:45.239591469 +0000 UTC m=+0.189244346 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.697 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.890 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.892 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12970MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.892 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.893 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.988 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:47:45 localhost nova_compute[236479]: 2025-10-14 09:47:45.989 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB 
phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:47:46 localhost nova_compute[236479]: 2025-10-14 09:47:46.014 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:47:46 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:46 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:47:46 localhost nova_compute[236479]: 2025-10-14 09:47:46.469 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:47:46 localhost nova_compute[236479]: 2025-10-14 09:47:46.475 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:47:46 localhost nova_compute[236479]: 2025-10-14 09:47:46.495 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 
512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:47:46 localhost nova_compute[236479]: 2025-10-14 09:47:46.498 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:47:46 localhost nova_compute[236479]: 2025-10-14 09:47:46.499 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:47:46 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:47:46 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:47:46 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:47:46 localhost systemd[1]: var-lib-containers-storage-overlay-0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8-merged.mount: Deactivated successfully. Oct 14 05:47:46 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully. 
Oct 14 05:47:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4089 DF PROTO=TCP SPT=55490 DPT=9101 SEQ=1546666915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F5AE90000000001030307) Oct 14 05:47:47 localhost nova_compute[236479]: 2025-10-14 09:47:47.493 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:47 localhost nova_compute[236479]: 2025-10-14 09:47:47.512 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:47 localhost nova_compute[236479]: 2025-10-14 09:47:47.513 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:47 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:47 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:47 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. 
Oct 14 05:47:48 localhost nova_compute[236479]: 2025-10-14 09:47:48.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:48 localhost nova_compute[236479]: 2025-10-14 09:47:48.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:47:48 localhost nova_compute[236479]: 2025-10-14 09:47:48.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:47:48 localhost nova_compute[236479]: 2025-10-14 09:47:48.187 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:47:48 localhost nova_compute[236479]: 2025-10-14 09:47:48.187 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:48 localhost nova_compute[236479]: 2025-10-14 09:47:48.188 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:48 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:48 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:49 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:47:49 localhost nova_compute[236479]: 2025-10-14 09:47:49.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:49 localhost nova_compute[236479]: 2025-10-14 09:47:49.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:47:49 localhost nova_compute[236479]: 2025-10-14 09:47:49.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:47:49 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:47:49 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 14 05:47:50 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 14 05:47:51 localhost systemd[1]: var-lib-containers-storage-overlay-8fb3dc6bf81a95cfcd70e4022b330b89375474ef10a51fbbe80fad5539619909-merged.mount: Deactivated successfully. Oct 14 05:47:51 localhost systemd[1]: var-lib-containers-storage-overlay-8fb3dc6bf81a95cfcd70e4022b330b89375474ef10a51fbbe80fad5539619909-merged.mount: Deactivated successfully. 
Oct 14 05:47:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4090 DF PROTO=TCP SPT=55490 DPT=9101 SEQ=1546666915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F6AA90000000001030307) Oct 14 05:47:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:47:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:47:52 localhost systemd[1]: tmp-crun.5WGnPp.mount: Deactivated successfully. Oct 14 05:47:52 localhost podman[254429]: 2025-10-14 09:47:52.551415134 +0000 UTC m=+0.097282607 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=) Oct 14 05:47:52 localhost podman[254429]: 2025-10-14 09:47:52.561233318 +0000 UTC m=+0.107100811 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc.) Oct 14 05:47:53 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 14 05:47:53 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 14 05:47:53 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:47:53 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:47:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12181 DF PROTO=TCP SPT=42146 DPT=9102 SEQ=356444919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F745E0000000001030307) Oct 14 05:47:53 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:47:53 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:47:54 localhost podman[254430]: 2025-10-14 09:47:54.013395615 +0000 UTC m=+1.553412197 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 05:47:54 localhost podman[254430]: 2025-10-14 09:47:54.10006199 +0000 UTC m=+1.640078552 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 05:47:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12182 DF PROTO=TCP SPT=42146 DPT=9102 SEQ=356444919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F786A0000000001030307) Oct 14 05:47:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:47:55 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:55 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:47:56 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 14 05:47:56 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:47:56 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:47:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:47:56 localhost podman[254471]: 2025-10-14 09:47:56.391104543 +0000 UTC m=+0.929926765 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible) Oct 14 05:47:56 localhost podman[254471]: 2025-10-14 09:47:56.430170723 +0000 UTC m=+0.968992965 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:47:56 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 14 05:47:57 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 14 05:47:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:47:57.605 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:47:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:47:57.606 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:47:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:47:57.606 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:47:57 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:58 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:47:58 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:47:58 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:47:58 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 14 05:47:59 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:47:59 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:47:59 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:47:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:47:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:47:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:47:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:47:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:47:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:00 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:00 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 14 05:48:00 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:00 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32925 DF PROTO=TCP SPT=45184 DPT=9100 SEQ=1338740024 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F8E440000000001030307) Oct 14 05:48:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31770 DF PROTO=TCP SPT=56644 DPT=9882 SEQ=3891191277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F8EEA0000000001030307) Oct 14 05:48:01 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:01 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:48:01 localhost systemd[1]: var-lib-containers-storage-overlay-79f4b0e95523a628062f3012de3b4171920b3b66bb237ad158b0a7cab481dd4f-merged.mount: Deactivated successfully. Oct 14 05:48:01 localhost systemd[1]: var-lib-containers-storage-overlay-79f4b0e95523a628062f3012de3b4171920b3b66bb237ad158b0a7cab481dd4f-merged.mount: Deactivated successfully. Oct 14 05:48:02 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. 
Oct 14 05:48:02 localhost systemd[1]: var-lib-containers-storage-overlay-8fb3dc6bf81a95cfcd70e4022b330b89375474ef10a51fbbe80fad5539619909-merged.mount: Deactivated successfully. Oct 14 05:48:02 localhost systemd[1]: var-lib-containers-storage-overlay-8fb3dc6bf81a95cfcd70e4022b330b89375474ef10a51fbbe80fad5539619909-merged.mount: Deactivated successfully. Oct 14 05:48:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32927 DF PROTO=TCP SPT=45184 DPT=9100 SEQ=1338740024 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761F9A690000000001030307) Oct 14 05:48:04 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:48:04 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:48:04 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:48:06 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:48:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:48:06 localhost podman[254493]: 2025-10-14 09:48:06.77283668 +0000 UTC m=+0.107640599 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:48:06 localhost podman[254493]: 2025-10-14 09:48:06.781108819 +0000 UTC m=+0.115912708 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Oct 14 05:48:06 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 14 05:48:07 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 05:48:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32928 DF PROTO=TCP SPT=45184 DPT=9100 SEQ=1338740024 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761FAA2A0000000001030307) Oct 14 05:48:07 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:08 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:48:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 05:48:08 localhost podman[254512]: 2025-10-14 09:48:08.197437053 +0000 UTC m=+0.089915031 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:48:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir 
of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:08 localhost podman[254513]: 2025-10-14 09:48:08.255039272 +0000 UTC m=+0.140920197 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:48:08 localhost podman[254513]: 2025-10-14 09:48:08.261901202 +0000 UTC m=+0.147782187 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:48:08 localhost podman[254513]: unhealthy Oct 14 05:48:08 localhost podman[254512]: 2025-10-14 09:48:08.285430703 +0000 UTC m=+0.177908661 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 14 05:48:08 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:08 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:08 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16792 DF PROTO=TCP SPT=55092 DPT=9105 SEQ=3221312693 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761FB5690000000001030307) Oct 14 05:48:10 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:48:10 localhost systemd[1]: var-lib-containers-storage-overlay-f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b-merged.mount: Deactivated successfully. Oct 14 05:48:10 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:48:10 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'. 
Oct 14 05:48:10 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:10 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:10 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:48:11 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:11 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:12 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:13 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:48:13 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully. Oct 14 05:48:13 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully. 
Oct 14 05:48:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36085 DF PROTO=TCP SPT=41854 DPT=9101 SEQ=2946990183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761FC4230000000001030307) Oct 14 05:48:14 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 14 05:48:14 localhost systemd[1]: var-lib-containers-storage-overlay-79f4b0e95523a628062f3012de3b4171920b3b66bb237ad158b0a7cab481dd4f-merged.mount: Deactivated successfully. Oct 14 05:48:14 localhost systemd[1]: var-lib-containers-storage-overlay-79f4b0e95523a628062f3012de3b4171920b3b66bb237ad158b0a7cab481dd4f-merged.mount: Deactivated successfully. Oct 14 05:48:15 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:15 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:48:16 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:16 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:16 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:17 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 14 05:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:48:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:17 localhost podman[254550]: 2025-10-14 09:48:17.268983673 +0000 UTC m=+0.082611738 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:48:17 localhost podman[254550]: 2025-10-14 09:48:17.276132592 +0000 UTC m=+0.089760627 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:48:17 localhost kernel: DROPPING: IN=br-ex 
OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36087 DF PROTO=TCP SPT=41854 DPT=9101 SEQ=2946990183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761FD0290000000001030307) Oct 14 05:48:17 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:48:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:17 localhost podman[254551]: 2025-10-14 09:48:17.540633254 +0000 UTC m=+0.349529694 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 05:48:17 localhost podman[254551]: 2025-10-14 09:48:17.550795462 +0000 UTC m=+0.359691892 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:48:17 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:17 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:48:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:19 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:19 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:48:19 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 14 05:48:19 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:48:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 14 05:48:20 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:20 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:20 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36088 DF PROTO=TCP SPT=41854 DPT=9101 SEQ=2946990183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761FDFE90000000001030307) Oct 14 05:48:21 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:21 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:21 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:22 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully. Oct 14 05:48:22 localhost systemd[1]: var-lib-containers-storage-overlay-6637693f27f036631577218db5378dc8c17c8e585b32c036e38effbb8a457aa9-merged.mount: Deactivated successfully. Oct 14 05:48:22 localhost systemd[1]: var-lib-containers-storage-overlay-6637693f27f036631577218db5378dc8c17c8e585b32c036e38effbb8a457aa9-merged.mount: Deactivated successfully. 
Oct 14 05:48:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16068 DF PROTO=TCP SPT=51532 DPT=9102 SEQ=3734405888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761FE98E0000000001030307) Oct 14 05:48:24 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:48:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:48:24 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 14 05:48:24 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 14 05:48:24 localhost podman[254586]: 2025-10-14 09:48:24.238991705 +0000 UTC m=+0.113279126 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=edpm, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Oct 14 05:48:24 localhost podman[254586]: 2025-10-14 09:48:24.255166272 +0000 UTC m=+0.129453713 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 14 05:48:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16069 DF PROTO=TCP SPT=51532 DPT=9102 SEQ=3734405888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A761FEDA90000000001030307) Oct 14 05:48:25 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 14 05:48:25 localhost systemd[1]: var-lib-containers-storage-overlay-f38cf64113906d8d9cc4f52e4d7c35a8819ff15f0f107851ac6093a00022f05b-merged.mount: Deactivated successfully. Oct 14 05:48:25 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:25 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:48:25 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. 
Oct 14 05:48:25 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:48:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:48:26 localhost podman[254607]: 2025-10-14 09:48:26.558073717 +0000 UTC m=+0.101085705 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller) Oct 14 05:48:26 localhost podman[254607]: 2025-10-14 09:48:26.594071146 +0000 UTC m=+0.137083104 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3) Oct 14 05:48:26 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:27 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:27 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. 
Oct 14 05:48:27 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully. Oct 14 05:48:28 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully. Oct 14 05:48:28 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:48:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:48:28 localhost systemd[1]: tmp-crun.q60e6k.mount: Deactivated successfully. 
Oct 14 05:48:28 localhost podman[254629]: 2025-10-14 09:48:28.546596666 +0000 UTC m=+0.089547092 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:48:28 localhost podman[254629]: 2025-10-14 09:48:28.552901132 +0000 UTC m=+0.095851578 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:48:29 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:29 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:29 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:29 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 14 05:48:30 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. 
Oct 14 05:48:30 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:48:30 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:30 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:30 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47681 DF PROTO=TCP SPT=49508 DPT=9100 SEQ=686499202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762003730000000001030307) Oct 14 05:48:30 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:30 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:30 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 14 05:48:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60074 DF PROTO=TCP SPT=38118 DPT=9882 SEQ=3141608913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762004190000000001030307) Oct 14 05:48:31 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:31 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:31 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 14 05:48:32 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 14 05:48:32 localhost systemd[1]: var-lib-containers-storage-overlay-141f8240b493de051d128d8af481e4eecafe4083c7fc86019e21768efb6df1ea-merged.mount: Deactivated successfully. Oct 14 05:48:32 localhost systemd[1]: var-lib-containers-storage-overlay-1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed-merged.mount: Deactivated successfully. Oct 14 05:48:32 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. Oct 14 05:48:32 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. 
Oct 14 05:48:32 localhost podman[246584]: time="2025-10-14T09:48:32Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\"" Oct 14 05:48:32 localhost podman[246584]: @ - - [14/Oct/2025:09:43:12 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1" Oct 14 05:48:33 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47683 DF PROTO=TCP SPT=49508 DPT=9100 SEQ=686499202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76200F690000000001030307) Oct 14 05:48:34 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully. Oct 14 05:48:34 localhost systemd[1]: var-lib-containers-storage-overlay-6637693f27f036631577218db5378dc8c17c8e585b32c036e38effbb8a457aa9-merged.mount: Deactivated successfully. Oct 14 05:48:34 localhost systemd[1]: var-lib-containers-storage-overlay-6637693f27f036631577218db5378dc8c17c8e585b32c036e38effbb8a457aa9-merged.mount: Deactivated successfully. Oct 14 05:48:35 localhost sshd[254653]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:48:36 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:48:36 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. 
Oct 14 05:48:36 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 14 05:48:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:48:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47684 DF PROTO=TCP SPT=49508 DPT=9100 SEQ=686499202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76201F290000000001030307) Oct 14 05:48:37 localhost systemd[1]: tmp-crun.kyobBR.mount: Deactivated successfully. Oct 14 05:48:37 localhost podman[254655]: 2025-10-14 09:48:37.565039266 +0000 UTC m=+0.100803408 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009) Oct 14 05:48:37 localhost podman[254655]: 2025-10-14 09:48:37.580979296 +0000 UTC m=+0.116743428 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251009) Oct 14 05:48:37 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 14 05:48:38 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 14 05:48:38 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 05:48:39 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 14 05:48:39 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 14 05:48:40 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 14 05:48:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7296 DF PROTO=TCP SPT=46276 DPT=9105 SEQ=1136889280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76202A690000000001030307) Oct 14 05:48:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:48:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:48:41 localhost podman[254674]: 2025-10-14 09:48:41.032000007 +0000 UTC m=+0.076480527 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 14 05:48:41 localhost systemd[1]: tmp-crun.DdpIEw.mount: Deactivated successfully. Oct 14 05:48:41 localhost podman[254675]: 2025-10-14 09:48:41.100990956 +0000 UTC m=+0.141799139 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:48:41 localhost podman[254675]: 2025-10-14 09:48:41.107963769 +0000 UTC m=+0.148771992 container 
exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:48:41 localhost podman[254675]: unhealthy Oct 14 05:48:41 localhost podman[254674]: 2025-10-14 09:48:41.125196183 +0000 UTC m=+0.169676703 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Oct 14 05:48:41 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:48:41 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Failed with result 'exit-code'. Oct 14 05:48:41 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:48:41 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 14 05:48:41 localhost systemd[1]: var-lib-containers-storage-overlay-141f8240b493de051d128d8af481e4eecafe4083c7fc86019e21768efb6df1ea-merged.mount: Deactivated successfully. Oct 14 05:48:41 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. 
Oct 14 05:48:41 localhost systemd[1]: var-lib-containers-storage-overlay-1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed-merged.mount: Deactivated successfully. Oct 14 05:48:42 localhost podman[246584]: @ - - [14/Oct/2025:09:43:22 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 130413 "" "Go-http-client/1.1" Oct 14 05:48:42 localhost podman_exporter[246870]: ts=2025-10-14T09:48:42.058Z caller=exporter.go:96 level=info msg="Listening on" address=:9882 Oct 14 05:48:42 localhost podman_exporter[246870]: ts=2025-10-14T09:48:42.058Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882 Oct 14 05:48:42 localhost podman_exporter[246870]: ts=2025-10-14T09:48:42.059Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882 Oct 14 05:48:42 localhost systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully. Oct 14 05:48:43 localhost systemd[1]: tmp-crun.P9yGC4.mount: Deactivated successfully. 
Oct 14 05:48:43 localhost podman[254826]: 2025-10-14 09:48:43.649920097 +0000 UTC m=+0.093554338 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, ceph=True, GIT_CLEAN=True, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, build-date=2025-09-24T08:57:55, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux ) Oct 14 05:48:43 localhost podman[254826]: 2025-10-14 09:48:43.746662257 +0000 UTC m=+0.190296508 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 
on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 05:48:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1301 DF PROTO=TCP SPT=46826 DPT=9101 SEQ=1389373314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762039530000000001030307) Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.161 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.186 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:48:45 
localhost nova_compute[236479]: 2025-10-14 09:48:45.186 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.186 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.187 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.187 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.641 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.824 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to 
have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.825 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=13034MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", 
"address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.825 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.825 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.905 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.906 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:48:45 localhost nova_compute[236479]: 2025-10-14 09:48:45.930 2 DEBUG 
oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:48:46 localhost nova_compute[236479]: 2025-10-14 09:48:46.398 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:48:46 localhost nova_compute[236479]: 2025-10-14 09:48:46.405 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:48:46 localhost nova_compute[236479]: 2025-10-14 09:48:46.431 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:48:46 localhost nova_compute[236479]: 2025-10-14 09:48:46.433 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:48:46 localhost nova_compute[236479]: 2025-10-14 09:48:46.434 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:48:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1303 DF PROTO=TCP SPT=46826 DPT=9101 SEQ=1389373314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762045690000000001030307) Oct 14 05:48:47 localhost nova_compute[236479]: 2025-10-14 09:48:47.434 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:48:47 localhost nova_compute[236479]: 2025-10-14 09:48:47.435 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:48:48 localhost nova_compute[236479]: 2025-10-14 09:48:48.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:48:48 localhost nova_compute[236479]: 2025-10-14 09:48:48.166 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:48:48 localhost nova_compute[236479]: 2025-10-14 09:48:48.166 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:48:48 localhost nova_compute[236479]: 2025-10-14 09:48:48.184 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:48:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:48:48 localhost podman[255002]: 2025-10-14 09:48:48.556680631 +0000 UTC m=+0.086825770 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:48:48 localhost podman[255002]: 2025-10-14 09:48:48.567443074 +0000 UTC m=+0.097588213 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Oct 14 05:48:48 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:48:49 localhost nova_compute[236479]: 2025-10-14 09:48:49.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:48:49 localhost nova_compute[236479]: 2025-10-14 09:48:49.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:48:49 localhost nova_compute[236479]: 2025-10-14 09:48:49.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 
09:48:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:48:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:48:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:48:50 localhost podman[255022]: 2025-10-14 09:48:50.539039387 +0000 UTC m=+0.080354500 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:48:50 localhost podman[255022]: 2025-10-14 09:48:50.549611455 +0000 UTC m=+0.090926588 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Oct 14 05:48:50 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:48:51 localhost nova_compute[236479]: 2025-10-14 09:48:51.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:48:51 localhost nova_compute[236479]: 2025-10-14 09:48:51.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:48:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1304 DF PROTO=TCP SPT=46826 DPT=9101 SEQ=1389373314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762055290000000001030307) Oct 14 05:48:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6314 DF PROTO=TCP SPT=49216 DPT=9102 SEQ=1975552721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76205EBE0000000001030307) Oct 14 05:48:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6315 DF PROTO=TCP SPT=49216 DPT=9102 SEQ=1975552721 
ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762062AA0000000001030307) Oct 14 05:48:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:48:56 localhost systemd[1]: tmp-crun.3tEeSM.mount: Deactivated successfully. Oct 14 05:48:56 localhost podman[255041]: 2025-10-14 09:48:56.546042444 +0000 UTC m=+0.087951799 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down 
image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 05:48:56 localhost podman[255041]: 2025-10-14 09:48:56.557883656 +0000 UTC m=+0.099793081 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350) Oct 14 05:48:56 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:48:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:48:57.607 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:48:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:48:57.607 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:48:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:48:57.608 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:48:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:48:58 localhost systemd[1]: tmp-crun.xNpdoh.mount: Deactivated successfully. 
Oct 14 05:48:58 localhost podman[255061]: 2025-10-14 09:48:58.54319334 +0000 UTC m=+0.081278124 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Oct 14 05:48:58 localhost podman[255061]: 2025-10-14 09:48:58.602980435 +0000 UTC m=+0.141065209 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 05:48:58 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:49:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49422 DF PROTO=TCP SPT=39580 DPT=9100 SEQ=3813137254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762078A40000000001030307) Oct 14 05:49:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:49:00 localhost podman[255086]: 2025-10-14 09:49:00.533648159 +0000 UTC m=+0.075100199 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:49:00 localhost podman[255086]: 2025-10-14 09:49:00.546268982 +0000 UTC m=+0.087720982 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:49:00 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:49:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15007 DF PROTO=TCP SPT=51666 DPT=9882 SEQ=1173175517 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620794A0000000001030307) Oct 14 05:49:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49424 DF PROTO=TCP SPT=39580 DPT=9100 SEQ=3813137254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762084A90000000001030307) Oct 14 05:49:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49425 DF PROTO=TCP SPT=39580 DPT=9100 SEQ=3813137254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620946A0000000001030307) Oct 14 05:49:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:49:08 localhost podman[255109]: 2025-10-14 09:49:08.552840449 +0000 UTC m=+0.093836143 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 05:49:08 localhost podman[255109]: 2025-10-14 09:49:08.592077053 +0000 UTC m=+0.133072767 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 05:49:08 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:49:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15856 DF PROTO=TCP SPT=35684 DPT=9105 SEQ=4106575320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76209FA90000000001030307) Oct 14 05:49:12 localhost sshd[255129]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:49:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:49:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:49:12 localhost systemd-logind[760]: New session 58 of user zuul. Oct 14 05:49:12 localhost systemd[1]: Started Session 58 of User zuul. Oct 14 05:49:12 localhost podman[255132]: 2025-10-14 09:49:12.572807067 +0000 UTC m=+0.109256981 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:49:12 localhost podman[255131]: 2025-10-14 
09:49:12.600681912 +0000 UTC m=+0.139186461 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 05:49:12 localhost podman[255131]: 2025-10-14 09:49:12.629387938 +0000 UTC m=+0.167892547 container exec_died 
6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS) Oct 14 05:49:12 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:49:12 localhost podman[255132]: 2025-10-14 09:49:12.681174444 +0000 UTC m=+0.217624428 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:49:12 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 05:49:13 localhost python3.9[255263]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:49:13 localhost python3.9[255373]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:49:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17825 DF PROTO=TCP SPT=56000 DPT=9101 SEQ=2326060634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620AE830000000001030307) Oct 14 05:49:14 localhost python3.9[255461]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760435353.4909422-3167-94495168515862/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:49:15 localhost python3.9[255571]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None setype=None attributes=None Oct 14 05:49:16 localhost python3.9[255681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:49:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17827 DF PROTO=TCP SPT=56000 DPT=9101 SEQ=2326060634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620BAAA0000000001030307) Oct 14 05:49:17 localhost python3.9[255738]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:49:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:49:19 localhost podman[255849]: 2025-10-14 09:49:19.081200801 +0000 UTC m=+0.095086798 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3) Oct 14 05:49:19 localhost podman[255849]: 2025-10-14 09:49:19.095134398 +0000 UTC m=+0.109016565 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 05:49:19 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:49:19 localhost python3.9[255848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:49:19 localhost python3.9[255924]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9rmo6zzf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:49:20 localhost python3.9[256034]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:49:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:49:20 localhost podman[256092]: 2025-10-14 09:49:20.763792455 +0000 UTC m=+0.080156304 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:49:20 localhost podman[256092]: 2025-10-14 09:49:20.77916634 +0000 UTC m=+0.095530209 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Oct 14 05:49:20 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:49:20 localhost python3.9[256091]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:49:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17828 DF PROTO=TCP SPT=56000 DPT=9101 SEQ=2326060634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620CA690000000001030307) Oct 14 05:49:21 localhost python3.9[256220]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:49:22 localhost python3[256331]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 14 05:49:23 localhost python3.9[256441]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:49:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60332 DF PROTO=TCP SPT=45092 DPT=9102 SEQ=1461854335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620D3EE0000000001030307) Oct 14 05:49:23 localhost python3.9[256498]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False 
state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:49:24 localhost python3.9[256608]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:49:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60333 DF PROTO=TCP SPT=45092 DPT=9102 SEQ=1461854335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620D7E90000000001030307) Oct 14 05:49:25 localhost python3.9[256665]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:49:25 localhost python3.9[256775]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:49:26 localhost python3.9[256832]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:49:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:49:26 localhost podman[256910]: 2025-10-14 09:49:26.782458159 +0000 UTC m=+0.088438542 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.) 
Oct 14 05:49:26 localhost podman[256910]: 2025-10-14 09:49:26.821070587 +0000 UTC m=+0.127050980 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7) Oct 14 05:49:26 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:49:27 localhost python3.9[256980]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:49:27 localhost python3.9[257037]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:49:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:49:29 localhost podman[257148]: 2025-10-14 09:49:29.187465757 +0000 UTC m=+0.077233948 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:49:29 localhost podman[257148]: 2025-10-14 09:49:29.22706557 +0000 UTC m=+0.116833801 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 05:49:29 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:49:29 localhost python3.9[257147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:49:29 localhost python3.9[257262]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760435368.7044342-3542-260945035488935/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:49:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47358 DF PROTO=TCP SPT=52306 DPT=9100 SEQ=1339089207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620EDD40000000001030307)
Oct 14 05:49:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62267 DF PROTO=TCP SPT=45280 DPT=9882 SEQ=1364373774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620EE790000000001030307)
Oct 14 05:49:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 05:49:31 localhost podman[257373]: 2025-10-14 09:49:31.327819617 +0000 UTC m=+0.079977440 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 14 05:49:31 localhost podman[257373]: 2025-10-14 09:49:31.335676614 +0000 UTC m=+0.087834437 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 05:49:31 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 05:49:31 localhost python3.9[257372]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:49:32 localhost python3.9[257505]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:49:33 localhost python3.9[257618]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:49:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47360 DF PROTO=TCP SPT=52306 DPT=9100 SEQ=1339089207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7620F9E90000000001030307)
Oct 14 05:49:33 localhost python3.9[257728]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:49:34 localhost python3.9[257839]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:49:35 localhost python3.9[257951]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 05:49:36 localhost python3.9[258064]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:49:36 localhost openstack_network_exporter[248748]: ERROR 09:49:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 05:49:36 localhost openstack_network_exporter[248748]: ERROR 09:49:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 05:49:36 localhost openstack_network_exporter[248748]: ERROR 09:49:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 05:49:36 localhost openstack_network_exporter[248748]: ERROR 09:49:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 05:49:36 localhost openstack_network_exporter[248748]:
Oct 14 05:49:36 localhost openstack_network_exporter[248748]: ERROR 09:49:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 05:49:36 localhost openstack_network_exporter[248748]:
Oct 14 05:49:36 localhost systemd[1]: session-58.scope: Deactivated successfully.
Oct 14 05:49:36 localhost systemd[1]: session-58.scope: Consumed 13.548s CPU time.
Oct 14 05:49:36 localhost systemd-logind[760]: Session 58 logged out. Waiting for processes to exit.
Oct 14 05:49:36 localhost systemd-logind[760]: Removed session 58.
Oct 14 05:49:37 localhost podman[246584]: time="2025-10-14T09:49:37Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 05:49:37 localhost podman[246584]: @ - - [14/Oct/2025:09:49:37 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 132062 "" "Go-http-client/1.1"
Oct 14 05:49:37 localhost podman[246584]: @ - - [14/Oct/2025:09:49:37 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15976 "" "Go-http-client/1.1"
Oct 14 05:49:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 05:49:39 localhost podman[258088]: 2025-10-14 09:49:39.548670092 +0000 UTC m=+0.088474172 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 05:49:39 localhost podman[258088]: 2025-10-14 09:49:39.585066772 +0000 UTC m=+0.124870812 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:49:39 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 05:49:43 localhost sshd[258107]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:49:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 05:49:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 05:49:43 localhost systemd-logind[760]: New session 59 of user zuul.
Oct 14 05:49:43 localhost systemd[1]: Started Session 59 of User zuul.
Oct 14 05:49:43 localhost podman[258110]: 2025-10-14 09:49:43.547713629 +0000 UTC m=+0.082370382 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 14 05:49:43 localhost systemd[1]: tmp-crun.paWa5q.mount: Deactivated successfully.
Oct 14 05:49:43 localhost podman[258109]: 2025-10-14 09:49:43.613626377 +0000 UTC m=+0.147752496 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 05:49:43 localhost podman[258110]: 2025-10-14 09:49:43.633345267 +0000 UTC m=+0.168002010 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 05:49:43 localhost podman[258109]: 2025-10-14 09:49:43.643463733 +0000 UTC m=+0.177589802 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:49:43 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 05:49:43 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 05:49:45 localhost python3.9[258262]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:49:45 localhost nova_compute[236479]: 2025-10-14 09:49:45.160 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:49:45 localhost python3.9[258372]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.194 2 DEBUG 
oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.195 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.195 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.196 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.196 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:49:46 localhost python3.9[258483]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.653 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.822 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.825 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=13021MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.825 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.826 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:49:46 localhost 
nova_compute[236479]: 2025-10-14 09:49:46.909 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.910 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 05:49:46 localhost nova_compute[236479]: 2025-10-14 09:49:46.933 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:49:47 localhost python3.9[258632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:49:47 localhost nova_compute[236479]: 2025-10-14 09:49:47.392 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:49:47 localhost nova_compute[236479]: 2025-10-14 09:49:47.398 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:49:47 localhost nova_compute[236479]: 2025-10-14 09:49:47.430 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:49:47 localhost nova_compute[236479]: 2025-10-14 09:49:47.433 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 05:49:47 localhost nova_compute[236479]: 2025-10-14 09:49:47.433 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:49:48 localhost python3.9[258720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435386.663614-104-78732602204566/.source.yaml follow=False _original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:49:48 localhost python3.9[258828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:49:49 localhost python3.9[258914]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435388.3012972-149-192014721718817/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:49:49 localhost nova_compute[236479]: 2025-10-14 09:49:49.431 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:49:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 05:49:49 localhost nova_compute[236479]: 2025-10-14 09:49:49.463 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:49:49 localhost nova_compute[236479]: 2025-10-14 09:49:49.464 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 05:49:49 localhost nova_compute[236479]: 2025-10-14 09:49:49.464 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 05:49:49 localhost nova_compute[236479]: 2025-10-14 09:49:49.483 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 05:49:49 localhost nova_compute[236479]: 2025-10-14 09:49:49.483 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:49:49 localhost nova_compute[236479]: 2025-10-14 09:49:49.484 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:49:49 localhost podman[258932]: 2025-10-14 09:49:49.537690457 +0000 UTC m=+0.078366596 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d)
Oct 14 05:49:49 localhost podman[258932]: 2025-10-14 09:49:49.549138979 +0000 UTC m=+0.089815098 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:49:49 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 05:49:49 localhost python3.9[259041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:49:50 localhost nova_compute[236479]: 2025-10-14 09:49:50.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:49:50 localhost nova_compute[236479]: 2025-10-14 09:49:50.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:49:50 localhost python3.9[259127]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435389.5149498-149-255979271474356/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:49:51 localhost python3.9[259235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:49:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 05:49:51 localhost podman[259292]: 2025-10-14 09:49:51.541062057 +0000 UTC m=+0.081172561 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:49:51 localhost podman[259292]: 2025-10-14 09:49:51.555179449 +0000 UTC m=+0.095289913 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:49:51 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 05:49:51 localhost python3.9[259334]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435390.7308183-149-133056289978276/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=a3a9b60551b1ee8a657897097b0021b194825d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:49:52 localhost python3.9[259447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:49:53 localhost nova_compute[236479]: 2025-10-14 09:49:53.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:49:53 localhost nova_compute[236479]: 2025-10-14 09:49:53.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 05:49:53 localhost python3.9[259533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435392.4715676-323-277007300113848/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=401f2db3441c75ad5886350294091560f714495b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:49:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34740 DF PROTO=TCP SPT=41628 DPT=9102 SEQ=1419975468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7621491E0000000001030307)
Oct 14 05:49:54 localhost python3.9[259641]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:49:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34741 DF PROTO=TCP SPT=41628 DPT=9102 SEQ=1419975468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76214D290000000001030307)
Oct 14 05:49:55 localhost python3.9[259753]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:49:55 localhost python3.9[259863]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:49:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34742 DF PROTO=TCP SPT=41628 DPT=9102 SEQ=1419975468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7621552A0000000001030307)
Oct 14 05:49:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 05:49:57 localhost systemd[1]: tmp-crun.Yk8DYa.mount: Deactivated successfully.
Oct 14 05:49:57 localhost podman[259921]: 2025-10-14 09:49:57.191673921 +0000 UTC m=+0.099461793 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Oct 14 05:49:57 localhost podman[259921]: 2025-10-14 09:49:57.20378238 +0000 UTC m=+0.111570252 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 14 05:49:57 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 05:49:57 localhost python3.9[259920]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:49:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:49:57.609 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:49:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:49:57.609 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:49:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:49:57.610 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:49:57 localhost python3.9[260051]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:49:58 localhost python3.9[260108]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:49:59 localhost systemd[1]: tmp-crun.QhxbjL.mount: Deactivated successfully.
Oct 14 05:49:59 localhost podman[260164]: 2025-10-14 09:49:59.537472817 +0000 UTC m=+0.078380597 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 14 05:49:59 localhost podman[260164]: 2025-10-14 09:49:59.604239947 +0000 UTC m=+0.145147707 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251009, managed_by=edpm_ansible)
Oct 14 05:49:59 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 05:50:00 localhost python3.9[260243]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:50:00 localhost podman[246584]: time="2025-10-14T09:50:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 05:50:00 localhost podman[246584]: @ - - [14/Oct/2025:09:50:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 132062 "" "Go-http-client/1.1"
Oct 14 05:50:00 localhost podman[246584]: @ - - [14/Oct/2025:09:50:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15979 "" "Go-http-client/1.1"
Oct 14 05:50:00 localhost python3.9[260353]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:50:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34743 DF PROTO=TCP SPT=41628 DPT=9102 SEQ=1419975468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762164E90000000001030307)
Oct 14 05:50:01 localhost python3.9[260410]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 05:50:01 localhost systemd[1]: tmp-crun.LHHGDW.mount: Deactivated successfully.
Oct 14 05:50:01 localhost podman[260444]: 2025-10-14 09:50:01.550421649 +0000 UTC m=+0.091712688 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 14 05:50:01 localhost podman[260444]: 2025-10-14 09:50:01.556708785 +0000 UTC m=+0.097999844 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 05:50:01 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 05:50:01 localhost python3.9[260545]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:50:02 localhost python3.9[260602]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:50:03 localhost openstack_network_exporter[248748]: ERROR 09:50:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 05:50:03 localhost openstack_network_exporter[248748]: ERROR 09:50:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 05:50:03 localhost openstack_network_exporter[248748]: ERROR 09:50:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 05:50:03 localhost openstack_network_exporter[248748]: ERROR 09:50:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 05:50:03 localhost openstack_network_exporter[248748]:
Oct 14 05:50:03 localhost openstack_network_exporter[248748]: ERROR 09:50:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 05:50:03 localhost openstack_network_exporter[248748]:
Oct 14 05:50:03 localhost python3.9[260713]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:50:03 localhost systemd[1]: Reloading.
Oct 14 05:50:03 localhost systemd-rc-local-generator[260734]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:50:03 localhost systemd-sysv-generator[260738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:50:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:50:04 localhost python3.9[260861]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:50:05 localhost python3.9[260918]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:50:06 localhost python3.9[261028]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:50:06 localhost python3.9[261085]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:50:07 localhost python3.9[261195]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:50:07 localhost systemd[1]: Reloading.
Oct 14 05:50:07 localhost systemd-rc-local-generator[261218]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:50:07 localhost systemd-sysv-generator[261221]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:50:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:50:07 localhost systemd[1]: Starting Create netns directory...
Oct 14 05:50:07 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 05:50:07 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 05:50:07 localhost systemd[1]: Finished Create netns directory.
Oct 14 05:50:08 localhost python3.9[261347]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 05:50:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 05:50:09 localhost podman[261445]: 2025-10-14 09:50:09.721925586 +0000 UTC m=+0.086669832 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:50:09 localhost podman[261445]: 2025-10-14 09:50:09.738045036 +0000 UTC m=+0.102789272 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Oct 14 05:50:09 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 05:50:09 localhost python3.9[261468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 05:50:10 localhost python3.9[261565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760435409.416222-734-65296646766074/.source.json _original_basename=.lo1aqv7n follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:50:11 localhost python3.9[261675]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:50:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 05:50:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 05:50:14 localhost podman[261962]: 2025-10-14 09:50:14.545637719 +0000 UTC m=+0.081142295 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent)
Oct 14 05:50:14 localhost podman[261963]: 2025-10-14 09:50:14.599498505 +0000 UTC m=+0.131990520 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 14 05:50:14 localhost podman[261963]: 2025-10-14 09:50:14.612052741 +0000 UTC m=+0.144544806 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 05:50:14 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 05:50:14 localhost podman[261962]: 2025-10-14 09:50:14.628993262 +0000 UTC m=+0.164497838 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 14 05:50:14 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 05:50:14 localhost python3.9[262004]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False
Oct 14 05:50:15 localhost python3.9[262135]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 14 05:50:16 localhost python3.9[262245]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 14 05:50:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 05:50:20 localhost podman[262328]: 2025-10-14 09:50:20.54748615 +0000 UTC m=+0.084642039 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid)
Oct 14 05:50:20 localhost podman[262328]: 2025-10-14 09:50:20.556629233 +0000 UTC m=+0.093785132 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 05:50:20 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 05:50:21 localhost python3[262402]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 14 05:50:21 localhost podman[262439]:
Oct 14 05:50:21 localhost podman[262439]: 2025-10-14 09:50:21.320544495 +0000 UTC m=+0.072742251 container create e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_id=neutron_sriov_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']})
Oct 14 05:50:21 localhost podman[262439]: 2025-10-14 09:50:21.27801223 +0000 UTC m=+0.030210016 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Oct 14 05:50:21 localhost python3[262402]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Oct 14 05:50:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 05:50:22 localhost podman[262585]: 2025-10-14 09:50:22.190679828 +0000 UTC m=+0.088707346 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:50:22 localhost podman[262585]: 2025-10-14 09:50:22.228452266 +0000 UTC m=+0.126479784 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:50:22 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 05:50:22 localhost python3.9[262586]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:50:23 localhost python3.9[262716]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:50:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30926 DF PROTO=TCP SPT=45376 DPT=9102 SEQ=232958554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7621BE4F0000000001030307)
Oct 14 05:50:24 localhost python3.9[262771]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 05:50:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30927 DF PROTO=TCP SPT=45376 DPT=9102 SEQ=232958554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7621C2690000000001030307)
Oct 14 05:50:24 localhost python3.9[262880]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760435424.2681658-998-71537032156679/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 05:50:26 localhost python3.9[262935]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 05:50:26 localhost systemd[1]: Reloading.
Oct 14 05:50:26 localhost systemd-rc-local-generator[262957]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:50:26 localhost systemd-sysv-generator[262964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:50:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 14 05:50:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30928 DF PROTO=TCP SPT=45376 DPT=9102 SEQ=232958554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7621CA6A0000000001030307)
Oct 14 05:50:27 localhost python3.9[263026]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 05:50:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 05:50:27 localhost systemd[1]: tmp-crun.BJ8MmT.mount: Deactivated successfully.
Oct 14 05:50:27 localhost podman[263079]: 2025-10-14 09:50:27.556361905 +0000 UTC m=+0.095111258 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container) Oct 14 05:50:27 localhost podman[263079]: 2025-10-14 09:50:27.577068117 +0000 UTC m=+0.115817520 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., 
io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, distribution-scope=public, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 05:50:27 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:50:28 localhost systemd[1]: Reloading. Oct 14 05:50:28 localhost systemd-rc-local-generator[263158]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:50:28 localhost systemd-sysv-generator[263162]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:50:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:50:28 localhost systemd[1]: Starting neutron_sriov_agent container... Oct 14 05:50:28 localhost systemd[1]: Started libcrun container. 
Oct 14 05:50:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e2d52a9ec6e0800e4e582905e211db2419ef42bfbb756c591b1b49a8799274/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 14 05:50:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e2d52a9ec6e0800e4e582905e211db2419ef42bfbb756c591b1b49a8799274/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 05:50:28 localhost podman[263171]: 2025-10-14 09:50:28.5979404 +0000 UTC m=+0.122978080 container init e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 05:50:28 localhost podman[263171]: 2025-10-14 09:50:28.607372802 +0000 
UTC m=+0.132410482 container start e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 05:50:28 localhost podman[263171]: neutron_sriov_agent Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: + sudo -E kolla_set_configs Oct 14 05:50:28 localhost systemd[1]: Started neutron_sriov_agent container. 
Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Validating config file Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Copying service configuration files Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Writing out command to execute Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for 
/var/lib/neutron/.cache/python-entrypoints Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: ++ cat /run_command Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: + CMD=/usr/bin/neutron-sriov-nic-agent Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: + ARGS= Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: + sudo kolla_copy_cacerts Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: + [[ ! -n '' ]] Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: + . kolla_extend_start Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: Running command: '/usr/bin/neutron-sriov-nic-agent' Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: + umask 0022 Oct 14 05:50:28 localhost neutron_sriov_agent[263185]: + exec /usr/bin/neutron-sriov-nic-agent Oct 14 05:50:29 localhost python3.9[263309]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:50:29 localhost systemd[1]: Stopping neutron_sriov_agent container... Oct 14 05:50:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:50:29 localhost systemd[1]: libpod-e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1.scope: Deactivated successfully. 
Oct 14 05:50:29 localhost podman[263313]: 2025-10-14 09:50:29.647715844 +0000 UTC m=+0.069959957 container died e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent) Oct 14 05:50:29 localhost systemd[1]: tmp-crun.2RWZCW.mount: Deactivated successfully. 
Oct 14 05:50:29 localhost podman[263326]: 2025-10-14 09:50:29.720870345 +0000 UTC m=+0.067958933 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller) Oct 14 05:50:29 localhost podman[263313]: 2025-10-14 09:50:29.745676336 +0000 UTC m=+0.167920419 container cleanup e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=neutron_sriov_agent, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 05:50:29 localhost podman[263313]: neutron_sriov_agent Oct 14 05:50:29 localhost podman[263325]: 2025-10-14 09:50:29.748889822 +0000 UTC m=+0.088685245 container cleanup e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', 
'/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 05:50:29 localhost podman[263326]: 2025-10-14 09:50:29.756539866 +0000 UTC m=+0.103628404 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 
14 05:50:29 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:50:29 localhost podman[263362]: 2025-10-14 09:50:29.832478471 +0000 UTC m=+0.054808182 container cleanup e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=neutron_sriov_agent) Oct 14 05:50:29 localhost podman[263362]: neutron_sriov_agent Oct 14 05:50:29 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. Oct 14 05:50:29 localhost systemd[1]: Stopped neutron_sriov_agent container. Oct 14 05:50:29 localhost systemd[1]: Starting neutron_sriov_agent container... Oct 14 05:50:29 localhost systemd[1]: Started libcrun container. 
Oct 14 05:50:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e2d52a9ec6e0800e4e582905e211db2419ef42bfbb756c591b1b49a8799274/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 14 05:50:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e2d52a9ec6e0800e4e582905e211db2419ef42bfbb756c591b1b49a8799274/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 05:50:29 localhost podman[263374]: 2025-10-14 09:50:29.978201437 +0000 UTC m=+0.107284411 container init e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=neutron_sriov_agent, org.label-schema.build-date=20251009) Oct 14 05:50:29 localhost podman[263374]: 2025-10-14 09:50:29.985922263 +0000 
UTC m=+0.115005247 container start e6d616af39a45e39616d225d030b37983b695ceb94fbea276b599002c60ca5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4cc69ad8ed018805bc6d0098013148b95d7c2debacba321671915c5ef7cd395a'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}) Oct 14 05:50:29 localhost podman[263374]: neutron_sriov_agent Oct 14 05:50:29 localhost neutron_sriov_agent[263389]: + sudo -E kolla_set_configs Oct 14 05:50:29 localhost systemd[1]: Started neutron_sriov_agent container. 
Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Validating config file Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Copying service configuration files Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Writing out command to execute Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for 
/var/lib/neutron/.cache/python-entrypoints Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: ++ cat /run_command Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: + CMD=/usr/bin/neutron-sriov-nic-agent Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: + ARGS= Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: + sudo kolla_copy_cacerts Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: + [[ ! -n '' ]] Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: + . kolla_extend_start Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: Running command: '/usr/bin/neutron-sriov-nic-agent' Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: + umask 0022 Oct 14 05:50:30 localhost neutron_sriov_agent[263389]: + exec /usr/bin/neutron-sriov-nic-agent Oct 14 05:50:30 localhost systemd[1]: session-59.scope: Deactivated successfully. Oct 14 05:50:30 localhost systemd[1]: session-59.scope: Consumed 23.776s CPU time. Oct 14 05:50:30 localhost systemd-logind[760]: Session 59 logged out. Waiting for processes to exit. Oct 14 05:50:30 localhost systemd-logind[760]: Removed session 59. 
Oct 14 05:50:30 localhost podman[246584]: time="2025-10-14T09:50:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:50:30 localhost podman[246584]: @ - - [14/Oct/2025:09:50:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 134020 "" "Go-http-client/1.1" Oct 14 05:50:30 localhost podman[246584]: @ - - [14/Oct/2025:09:50:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16410 "" "Go-http-client/1.1" Oct 14 05:50:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30929 DF PROTO=TCP SPT=45376 DPT=9102 SEQ=232958554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7621DA2A0000000001030307) Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.710 2 INFO neutron.common.config [-] Logging enabled!#033[00m Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.710 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43#033[00m Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.710 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.710 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.710 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.710 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: 
{'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.711 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005486731.localdomain'}#033[00m Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.711 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-319c5def-0102-4ed1-940e-3966afd6de8e - - - - - -] RPC agent_id: nic-switch-agent.np0005486731.localdomain#033[00m Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.715 2 INFO neutron.agent.agent_extensions_manager [None req-319c5def-0102-4ed1-940e-3966afd6de8e - - - - - -] Loaded agent extensions: ['qos']#033[00m Oct 14 05:50:31 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:31.716 2 INFO neutron.agent.agent_extensions_manager [None req-319c5def-0102-4ed1-940e-3966afd6de8e - - - - - -] Initializing agent extension 'qos'#033[00m Oct 14 05:50:32 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:32.089 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-319c5def-0102-4ed1-940e-3966afd6de8e - - - - - -] Agent initialized successfully, now running... #033[00m Oct 14 05:50:32 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:32.090 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-319c5def-0102-4ed1-940e-3966afd6de8e - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Oct 14 05:50:32 localhost neutron_sriov_agent[263389]: 2025-10-14 09:50:32.090 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-319c5def-0102-4ed1-940e-3966afd6de8e - - - - - -] Agent out of sync with plugin!#033[00m Oct 14 05:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:50:32 localhost podman[263422]: 2025-10-14 09:50:32.541390019 +0000 UTC m=+0.083603501 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:50:32 localhost podman[263422]: 2025-10-14 09:50:32.575542059 +0000 UTC m=+0.117755491 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:50:32 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:50:33 localhost openstack_network_exporter[248748]: ERROR 09:50:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:50:33 localhost openstack_network_exporter[248748]: ERROR 09:50:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:50:33 localhost openstack_network_exporter[248748]: ERROR 09:50:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:50:33 localhost openstack_network_exporter[248748]: ERROR 09:50:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:50:33 localhost openstack_network_exporter[248748]: Oct 14 05:50:33 localhost openstack_network_exporter[248748]: ERROR 09:50:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:50:33 localhost openstack_network_exporter[248748]: Oct 14 05:50:37 localhost sshd[263446]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:50:37 localhost systemd-logind[760]: New session 60 of user zuul. Oct 14 05:50:37 localhost systemd[1]: Started Session 60 of User zuul. Oct 14 05:50:38 localhost python3.9[263557]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:50:39 localhost python3.9[263671]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:50:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:50:40 localhost systemd[1]: tmp-crun.melgKr.mount: Deactivated successfully. 
Oct 14 05:50:40 localhost podman[263680]: 2025-10-14 09:50:40.565074746 +0000 UTC m=+0.103358387 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:50:40 localhost podman[263680]: 2025-10-14 09:50:40.608023832 +0000 UTC m=+0.146307473 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 05:50:40 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:50:41 localhost python3.9[263753]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:50:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:50:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:50:45 localhost systemd[1]: tmp-crun.O3F1QS.mount: Deactivated successfully. 
Oct 14 05:50:45 localhost podman[263866]: 2025-10-14 09:50:45.100552503 +0000 UTC m=+0.101361424 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:50:45 localhost podman[263867]: 2025-10-14 09:50:45.139258385 +0000 UTC 
m=+0.136262275 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:50:45 localhost podman[263867]: 2025-10-14 09:50:45.151143272 +0000 UTC m=+0.148147162 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, 
maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:50:45 localhost podman[263866]: 2025-10-14 09:50:45.159481664 +0000 UTC m=+0.160290535 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:50:45 localhost systemd[1]: 
fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:50:45 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:50:45 localhost python3.9[263865]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.160 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.194 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.194 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.195 2 DEBUG oslo_concurrency.lockutils [None 
req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.195 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.196 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:50:46 localhost python3.9[264019]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.667 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.881 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to 
have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.882 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12911MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", 
"address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.883 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.883 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.952 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:50:46 localhost nova_compute[236479]: 2025-10-14 09:50:46.952 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:50:46 localhost python3.9[264151]: ansible-ansible.builtin.file 
Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:47 localhost nova_compute[236479]: 2025-10-14 09:50:47.013 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:50:47 localhost nova_compute[236479]: 2025-10-14 09:50:47.487 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:50:47 localhost nova_compute[236479]: 2025-10-14 09:50:47.494 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:50:47 localhost nova_compute[236479]: 2025-10-14 09:50:47.515 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 
'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:50:47 localhost nova_compute[236479]: 2025-10-14 09:50:47.518 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:50:47 localhost nova_compute[236479]: 2025-10-14 09:50:47.518 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:50:47 localhost python3.9[264281]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:48 localhost python3.9[264393]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:48 localhost nova_compute[236479]: 2025-10-14 09:50:48.519 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] 
Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:50:48 localhost python3.9[264503]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:49 localhost nova_compute[236479]: 2025-10-14 09:50:49.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:50:49 localhost nova_compute[236479]: 2025-10-14 09:50:49.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:50:49 localhost python3.9[264613]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:50:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:50:50 localhost nova_compute[236479]: 2025-10-14 09:50:50.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:50:50 localhost nova_compute[236479]: 2025-10-14 09:50:50.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:50:50 localhost nova_compute[236479]: 2025-10-14 09:50:50.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:50:50 localhost nova_compute[236479]: 2025-10-14 09:50:50.185 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:50:50 localhost nova_compute[236479]: 2025-10-14 09:50:50.185 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:50:50 localhost python3.9[264723]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:50:51 localhost podman[264834]: 2025-10-14 09:50:51.115082301 +0000 UTC m=+0.088106960 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 05:50:51 localhost podman[264834]: 2025-10-14 09:50:51.128258083 +0000 UTC m=+0.101282722 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:50:51 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:50:51 localhost python3.9[264833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:50:52 localhost nova_compute[236479]: 2025-10-14 09:50:52.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:50:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:50:52 localhost podman[264885]: 2025-10-14 09:50:52.541108139 +0000 UTC m=+0.080398574 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 05:50:52 localhost podman[264885]: 2025-10-14 09:50:52.580621433 +0000 UTC m=+0.119911808 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 14 05:50:52 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:50:52 localhost python3.9[264959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435450.5161128-278-73329048387892/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:53 localhost python3.9[265067]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:50:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34787 DF PROTO=TCP SPT=34970 DPT=9102 SEQ=3048100589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7622337E0000000001030307) Oct 14 05:50:54 localhost nova_compute[236479]: 2025-10-14 09:50:54.166 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:50:54 localhost nova_compute[236479]: 2025-10-14 09:50:54.166 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:50:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34788 DF PROTO=TCP SPT=34970 DPT=9102 SEQ=3048100589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762237690000000001030307) Oct 14 05:50:55 localhost python3.9[265153]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435453.2331626-323-263122067310784/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:55 localhost python3.9[265261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:50:56 localhost python3.9[265347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435455.3505137-323-241718080125436/.source.conf follow=False _original_basename=rootwrap.conf.j2 
checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34789 DF PROTO=TCP SPT=34970 DPT=9102 SEQ=3048100589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76223F690000000001030307) Oct 14 05:50:57 localhost python3.9[265455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:50:57 localhost python3.9[265541]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435456.491372-323-4318935556237/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=289778de27e4f4651425ec51d262936812a433e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:50:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:50:57.611 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:50:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:50:57.611 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:50:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:50:57.611 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:50:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:50:58 localhost podman[265586]: 2025-10-14 09:50:58.55158256 +0000 UTC m=+0.088458910 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, config_id=edpm, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, io.openshift.expose-services=, io.buildah.version=1.33.7, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible) Oct 14 05:50:58 localhost podman[265586]: 2025-10-14 09:50:58.567063483 +0000 UTC m=+0.103939893 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 14 05:50:58 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:50:58 localhost python3.9[265670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:50:59 localhost python3.9[265756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435458.4231617-497-59111583447702/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=401f2db3441c75ad5886350294091560f714495b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:51:00 localhost python3.9[265864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:51:00 localhost podman[265929]: 2025-10-14 09:51:00.54731063 +0000 UTC m=+0.085973253 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 05:51:00 localhost podman[246584]: time="2025-10-14T09:51:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:51:00 localhost podman[246584]: @ - - [14/Oct/2025:09:51:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 134020 "" "Go-http-client/1.1" Oct 14 05:51:00 localhost podman[265929]: 2025-10-14 09:51:00.634037663 +0000 UTC m=+0.172700306 container exec_died 
328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller) Oct 14 05:51:00 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:51:00 localhost podman[246584]: @ - - [14/Oct/2025:09:51:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16424 "" "Go-http-client/1.1" Oct 14 05:51:00 localhost python3.9[265963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435459.6602366-542-270308685518973/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:51:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34790 DF PROTO=TCP SPT=34970 DPT=9102 SEQ=3048100589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76224F2A0000000001030307) Oct 14 05:51:01 localhost python3.9[266083]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:02 localhost python3.9[266169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435460.941085-542-156218897077304/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:51:02 localhost sshd[266258]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:51:02 
localhost python3.9[266279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:51:02 localhost podman[266280]: 2025-10-14 09:51:02.876061961 +0000 UTC m=+0.074094637 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:51:02 localhost podman[266280]: 2025-10-14 09:51:02.888113862 +0000 UTC m=+0.086146538 
container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:51:02 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:51:03 localhost python3.9[266358]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:51:03 localhost openstack_network_exporter[248748]: ERROR 09:51:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:51:03 localhost openstack_network_exporter[248748]: ERROR 09:51:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:51:03 localhost openstack_network_exporter[248748]: ERROR 09:51:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:51:03 localhost openstack_network_exporter[248748]: ERROR 09:51:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:51:03 localhost openstack_network_exporter[248748]: Oct 14 05:51:03 localhost openstack_network_exporter[248748]: ERROR 09:51:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:51:03 localhost openstack_network_exporter[248748]: Oct 14 05:51:03 localhost python3.9[266466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:04 localhost python3.9[266552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435463.4440994-629-170621336003688/.source 
follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:51:05 localhost python3.9[266660]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:51:05 localhost python3.9[266772]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:51:07 localhost python3.9[266882]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:08 localhost python3.9[266939]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:51:08 localhost python3.9[267049]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 
get_mime=True get_attributes=True Oct 14 05:51:10 localhost python3.9[267106]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:51:10 localhost python3.9[267216]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:51:11 localhost systemd[1]: tmp-crun.ate3dt.mount: Deactivated successfully. 
Oct 14 05:51:11 localhost podman[267327]: 2025-10-14 09:51:11.384870835 +0000 UTC m=+0.110897849 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 05:51:11 localhost podman[267327]: 2025-10-14 09:51:11.397119751 +0000 UTC m=+0.123146755 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:51:11 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:51:11 localhost python3.9[267326]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:11 localhost python3.9[267403]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:51:12 localhost python3.9[267513]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:13 localhost python3.9[267570]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:51:13 localhost python3.9[267680]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:51:13 localhost systemd[1]: Reloading. 
Oct 14 05:51:14 localhost systemd-sysv-generator[267712]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:51:14 localhost systemd-rc-local-generator[267709]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:51:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:51:15 localhost python3.9[267828]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:51:15 localhost systemd[1]: tmp-crun.MO2Z8Q.mount: Deactivated successfully. 
Oct 14 05:51:15 localhost podman[267883]: 2025-10-14 09:51:15.560626739 +0000 UTC m=+0.097736447 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009) Oct 14 05:51:15 localhost podman[267883]: 2025-10-14 09:51:15.570946624 +0000 UTC 
m=+0.108056342 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent) Oct 14 05:51:15 localhost systemd[1]: tmp-crun.E9RcVv.mount: Deactivated successfully. 
Oct 14 05:51:15 localhost podman[267884]: 2025-10-14 09:51:15.615258275 +0000 UTC m=+0.152904418 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:51:15 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:51:15 localhost podman[267884]: 2025-10-14 09:51:15.679388026 +0000 UTC m=+0.217034149 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:51:15 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 05:51:15 localhost python3.9[267905]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:51:16 localhost python3.9[268036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:16 localhost python3.9[268093]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:51:17 localhost python3.9[268203]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:51:17 localhost systemd[1]: Reloading. Oct 14 05:51:17 localhost systemd-rc-local-generator[268230]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:51:17 localhost systemd-sysv-generator[268233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:51:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:51:18 localhost systemd[1]: Starting Create netns directory... Oct 14 05:51:18 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 14 05:51:18 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 14 05:51:18 localhost systemd[1]: Finished Create netns directory. Oct 14 05:51:19 localhost python3.9[268355]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:51:20 localhost python3.9[268465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:51:20 localhost python3.9[268553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760435479.6136606-1073-240259842610768/.source.json _original_basename=.8il2f7fm follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:51:21 localhost systemd[1]: tmp-crun.kBlWeQ.mount: Deactivated successfully. Oct 14 05:51:21 localhost podman[268625]: 2025-10-14 09:51:21.554055084 +0000 UTC m=+0.091102901 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS) Oct 14 05:51:21 localhost podman[268625]: 2025-10-14 09:51:21.564639477 +0000 UTC m=+0.101687254 
container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 05:51:21 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:51:22 localhost python3.9[268682]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:51:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:51:23 localhost podman[268793]: 2025-10-14 09:51:23.495429285 +0000 UTC m=+0.079083710 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:51:23 localhost podman[268793]: 2025-10-14 09:51:23.53500155 +0000 UTC m=+0.118655935 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:51:23 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:51:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42933 DF PROTO=TCP SPT=51274 DPT=9102 SEQ=306075435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7622A8AE0000000001030307) Oct 14 05:51:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42934 DF PROTO=TCP SPT=51274 DPT=9102 SEQ=306075435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7622ACA90000000001030307) Oct 14 05:51:26 localhost python3.9[269010]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Oct 14 05:51:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42935 DF PROTO=TCP SPT=51274 DPT=9102 SEQ=306075435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7622B4A90000000001030307) Oct 14 05:51:27 localhost python3.9[269121]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:51:28 localhost python3.9[269231]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 14 05:51:28 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:51:28 localhost podman[269294]: 2025-10-14 09:51:28.819004096 +0000 UTC m=+0.086814896 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 05:51:28 localhost podman[269294]: 2025-10-14 09:51:28.834152641 +0000 UTC m=+0.101963471 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 14 05:51:28 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:51:30 localhost podman[246584]: time="2025-10-14T09:51:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:51:30 localhost podman[246584]: @ - - [14/Oct/2025:09:51:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 134020 "" "Go-http-client/1.1" Oct 14 05:51:30 localhost podman[246584]: @ - - [14/Oct/2025:09:51:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16424 "" "Go-http-client/1.1" Oct 14 05:51:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42936 DF PROTO=TCP SPT=51274 DPT=9102 SEQ=306075435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7622C4690000000001030307) Oct 14 05:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:51:31 localhost podman[269362]: 2025-10-14 09:51:31.528275254 +0000 UTC m=+0.070727967 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:51:31 localhost podman[269362]: 2025-10-14 09:51:31.564267214 +0000 UTC m=+0.106719907 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 14 05:51:31 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:51:32 localhost python3[269479]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:51:32 localhost podman[269515]: Oct 14 05:51:32 localhost podman[269515]: 2025-10-14 09:51:32.754203006 +0000 UTC m=+0.080591031 container create c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=neutron_dhcp, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 05:51:32 localhost podman[269515]: 2025-10-14 09:51:32.71011508 +0000 UTC m=+0.036503135 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 05:51:32 localhost python3[269479]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume 
/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 05:51:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:51:33 localhost podman[269588]: 2025-10-14 09:51:33.170709923 +0000 UTC m=+0.073304676 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:51:33 localhost podman[269588]: 2025-10-14 09:51:33.184109201 +0000 UTC m=+0.086704014 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:51:33 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:51:33 localhost openstack_network_exporter[248748]: ERROR 09:51:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:51:33 localhost openstack_network_exporter[248748]: ERROR 09:51:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:51:33 localhost openstack_network_exporter[248748]: ERROR 09:51:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:51:33 localhost openstack_network_exporter[248748]: ERROR 09:51:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:51:33 localhost openstack_network_exporter[248748]: Oct 14 05:51:33 localhost openstack_network_exporter[248748]: ERROR 09:51:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:51:33 localhost openstack_network_exporter[248748]: Oct 14 05:51:34 localhost python3.9[269704]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:51:35 localhost python3.9[269816]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:51:36 localhost python3.9[269871]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:51:37 localhost python3.9[269980]: ansible-copy Invoked with 
src=/home/zuul/.ansible/tmp/ansible-tmp-1760435496.697401-1337-76053642079044/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:51:37 localhost python3.9[270035]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:51:37 localhost systemd[1]: Reloading. Oct 14 05:51:38 localhost systemd-rc-local-generator[270058]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:51:38 localhost systemd-sysv-generator[270065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:51:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:51:39 localhost python3.9[270125]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:51:39 localhost systemd[1]: Reloading. Oct 14 05:51:39 localhost systemd-rc-local-generator[270150]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:51:39 localhost systemd-sysv-generator[270157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 14 05:51:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:51:40 localhost systemd[1]: Starting neutron_dhcp_agent container... Oct 14 05:51:40 localhost systemd[1]: Started libcrun container. Oct 14 05:51:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40333967a51e58f4bfee3500c9bd57bdc6112d89607ced229865b5cce3234b7e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 14 05:51:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40333967a51e58f4bfee3500c9bd57bdc6112d89607ced229865b5cce3234b7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 05:51:40 localhost podman[270166]: 2025-10-14 09:51:40.253601961 +0000 UTC m=+0.130607764 container init c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:51:40 localhost podman[270166]: 2025-10-14 09:51:40.262922469 +0000 UTC m=+0.139928272 container start c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 05:51:40 localhost podman[270166]: neutron_dhcp_agent Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: + sudo -E kolla_set_configs Oct 14 05:51:40 localhost systemd[1]: Started neutron_dhcp_agent container. Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Validating config file Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Copying service configuration files Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Writing out command to execute Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: 
INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/cd0de74397aa76b626744172300028943e2372ca220b3e27b1c7d2b66ff2832c Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: ++ cat /run_command Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: + CMD=/usr/bin/neutron-dhcp-agent Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: + ARGS= Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: + sudo kolla_copy_cacerts Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: + [[ ! -n '' ]] Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: + . 
kolla_extend_start Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: Running command: '/usr/bin/neutron-dhcp-agent' Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: + umask 0022 Oct 14 05:51:40 localhost neutron_dhcp_agent[270181]: + exec /usr/bin/neutron-dhcp-agent Oct 14 05:51:41 localhost python3.9[270305]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:51:41 localhost systemd[1]: Stopping neutron_dhcp_agent container... Oct 14 05:51:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:51:41 localhost systemd[1]: tmp-crun.g6fqCC.mount: Deactivated successfully. Oct 14 05:51:41 localhost systemd[1]: libpod-c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206.scope: Deactivated successfully. Oct 14 05:51:41 localhost systemd[1]: libpod-c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206.scope: Consumed 1.249s CPU time. 
Oct 14 05:51:41 localhost podman[270319]: 2025-10-14 09:51:41.560590435 +0000 UTC m=+0.096652389 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:51:41 localhost podman[270309]: 2025-10-14 09:51:41.57394583 +0000 UTC m=+0.149887517 container died c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=neutron_dhcp, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, container_name=neutron_dhcp_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:51:41 localhost podman[270319]: 2025-10-14 09:51:41.598102405 +0000 UTC m=+0.134164339 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute)
Oct 14 05:51:41 localhost podman[270309]: 2025-10-14 09:51:41.623633196 +0000 UTC m=+0.199574833 container cleanup c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:51:41 localhost podman[270309]: neutron_dhcp_agent
Oct 14 05:51:41 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 05:51:41 localhost podman[270368]: error opening file `/run/crun/c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206/status`: No such file or directory
Oct 14 05:51:41 localhost podman[270357]: 2025-10-14 09:51:41.725134002 +0000 UTC m=+0.064570703 container cleanup c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=neutron_dhcp, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 05:51:41 localhost podman[270357]: neutron_dhcp_agent
Oct 14 05:51:41 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully.
Oct 14 05:51:41 localhost systemd[1]: Stopped neutron_dhcp_agent container.
Oct 14 05:51:41 localhost systemd[1]: Starting neutron_dhcp_agent container...
Oct 14 05:51:41 localhost systemd[1]: Started libcrun container.
Oct 14 05:51:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40333967a51e58f4bfee3500c9bd57bdc6112d89607ced229865b5cce3234b7e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40333967a51e58f4bfee3500c9bd57bdc6112d89607ced229865b5cce3234b7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:41 localhost podman[270370]: 2025-10-14 09:51:41.867134019 +0000 UTC m=+0.110643611 container init c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 05:51:41 localhost podman[270370]: 2025-10-14 09:51:41.876999873 +0000 UTC m=+0.120509455 container start c0b0aa4ef6b5cdc0d8a813ac63dd81cd6e7228469b681a76d2ece4cdf8007206 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6f71f1b3976210fc0eded1de5055d572867cd047981661e3e7d8a66efc3fc262'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=neutron_dhcp)
Oct 14 05:51:41 localhost podman[270370]: neutron_dhcp_agent
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: + sudo -E kolla_set_configs
Oct 14 05:51:41 localhost systemd[1]: Started neutron_dhcp_agent container.
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Validating config file
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Copying service configuration files
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Writing out command to execute
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/cd0de74397aa76b626744172300028943e2372ca220b3e27b1c7d2b66ff2832c
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: ++ cat /run_command
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: + CMD=/usr/bin/neutron-dhcp-agent
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: + ARGS=
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: + sudo kolla_copy_cacerts
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: + [[ ! -n '' ]]
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: + . kolla_extend_start
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: Running command: '/usr/bin/neutron-dhcp-agent'
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: + umask 0022
Oct 14 05:51:41 localhost neutron_dhcp_agent[270385]: + exec /usr/bin/neutron-dhcp-agent
Oct 14 05:51:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 09:51:43.173 270389 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct 14 05:51:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 09:51:43.173 270389 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m
Oct 14 05:51:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 09:51:43.553 270389 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m
Oct 14 05:51:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 09:51:43.728 270389 INFO neutron.agent.dhcp.agent [None req-8659691f-e8ab-446a-aac7-f04b0c72c9c2 - - - - - -] All active networks have been fetched through RPC.#033[00m
Oct 14 05:51:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 09:51:43.729 270389 INFO neutron.agent.dhcp.agent [None req-8659691f-e8ab-446a-aac7-f04b0c72c9c2 - - - - - -] Synchronizing state complete#033[00m
Oct 14 05:51:43 localhost systemd[1]: session-60.scope: Deactivated successfully.
Oct 14 05:51:43 localhost systemd[1]: session-60.scope: Consumed 36.335s CPU time.
Oct 14 05:51:43 localhost systemd-logind[760]: Session 60 logged out. Waiting for processes to exit.
Oct 14 05:51:43 localhost systemd-logind[760]: Removed session 60.
Oct 14 05:51:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 09:51:43.823 270389 INFO neutron.agent.dhcp.agent [None req-8659691f-e8ab-446a-aac7-f04b0c72c9c2 - - - - - -] DHCP agent started#033[00m
Oct 14 05:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 05:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 05:51:46 localhost podman[270419]: 2025-10-14 09:51:46.544027328 +0000 UTC m=+0.080018165 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 14 05:51:46 localhost podman[270419]: 2025-10-14 09:51:46.555122374 +0000 UTC m=+0.091113221 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 05:51:46 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 05:51:46 localhost podman[270418]: 2025-10-14 09:51:46.601557632 +0000 UTC m=+0.140758275 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 14 05:51:46 localhost podman[270418]: 2025-10-14 09:51:46.612148654 +0000 UTC m=+0.151349337 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:51:46 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.160 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.189 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.190 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.190 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.190 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.191 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:51:47 localhost ovn_metadata_agent[161927]: 2025-10-14 09:51:47.327 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:51:47 localhost ovn_metadata_agent[161927]: 2025-10-14 09:51:47.329 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:51:47 localhost ovn_metadata_agent[161927]: 2025-10-14 09:51:47.331 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.683 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.885 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.886 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12774MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.887 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:51:47 localhost nova_compute[236479]: 2025-10-14 09:51:47.887 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:51:48 localhost nova_compute[236479]: 2025-10-14 09:51:48.209 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:51:48 localhost nova_compute[236479]: 2025-10-14 09:51:48.210 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:51:48 localhost nova_compute[236479]: 2025-10-14 09:51:48.494 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:51:48 localhost nova_compute[236479]: 2025-10-14 09:51:48.778 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:51:48 localhost nova_compute[236479]: 2025-10-14 09:51:48.779 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:51:48 localhost nova_compute[236479]: 2025-10-14 09:51:48.797 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 14 05:51:48 localhost nova_compute[236479]: 2025-10-14 09:51:48.830 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSSE3,COMPUTE_NODE,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 14 05:51:48 localhost nova_compute[236479]: 2025-10-14 09:51:48.845 2 DEBUG oslo_concurrency.processutils [None 
req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:51:49 localhost nova_compute[236479]: 2025-10-14 09:51:49.352 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:51:49 localhost nova_compute[236479]: 2025-10-14 09:51:49.357 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:51:49 localhost nova_compute[236479]: 2025-10-14 09:51:49.384 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:51:49 localhost nova_compute[236479]: 2025-10-14 09:51:49.386 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:51:49 localhost nova_compute[236479]: 2025-10-14 09:51:49.387 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:51:52 localhost nova_compute[236479]: 2025-10-14 09:51:52.388 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:52 localhost nova_compute[236479]: 2025-10-14 09:51:52.388 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:51:52 localhost nova_compute[236479]: 2025-10-14 09:51:52.389 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:51:52 localhost nova_compute[236479]: 2025-10-14 09:51:52.413 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:51:52 localhost nova_compute[236479]: 2025-10-14 09:51:52.414 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:52 localhost nova_compute[236479]: 2025-10-14 09:51:52.414 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:52 localhost nova_compute[236479]: 2025-10-14 09:51:52.415 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:51:52 localhost podman[270503]: 2025-10-14 09:51:52.54153671 +0000 UTC m=+0.078197876 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 05:51:52 localhost podman[270503]: 2025-10-14 09:51:52.55052786 +0000 UTC m=+0.087189076 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:51:52 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:51:53 localhost nova_compute[236479]: 2025-10-14 09:51:53.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:53 localhost nova_compute[236479]: 2025-10-14 09:51:53.186 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24001 DF PROTO=TCP SPT=56636 DPT=9102 SEQ=596042296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76231DDD0000000001030307) Oct 14 05:51:54 localhost nova_compute[236479]: 2025-10-14 09:51:54.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:54 localhost nova_compute[236479]: 2025-10-14 09:51:54.165 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:51:54 localhost nova_compute[236479]: 2025-10-14 09:51:54.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:51:54 localhost podman[270520]: 2025-10-14 09:51:54.539755716 +0000 UTC m=+0.078478794 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:51:54 localhost podman[270520]: 2025-10-14 09:51:54.553138473 +0000 UTC m=+0.091861591 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:51:54 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:51:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24002 DF PROTO=TCP SPT=56636 DPT=9102 SEQ=596042296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762321E90000000001030307) Oct 14 05:51:55 localhost nova_compute[236479]: 2025-10-14 09:51:55.180 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:55 localhost nova_compute[236479]: 2025-10-14 09:51:55.181 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 14 05:51:56 localhost nova_compute[236479]: 2025-10-14 09:51:56.194 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:51:56 localhost nova_compute[236479]: 2025-10-14 09:51:56.195 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m 
Oct 14 05:51:56 localhost nova_compute[236479]: 2025-10-14 09:51:56.215 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 14 05:51:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24003 DF PROTO=TCP SPT=56636 DPT=9102 SEQ=596042296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762329EA0000000001030307) Oct 14 05:51:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:51:57.615 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:51:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:51:57.615 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:51:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:51:57.615 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:51:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 05:51:59 localhost podman[270539]: 2025-10-14 09:51:59.540642355 +0000 UTC m=+0.081987827 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git) Oct 14 05:51:59 localhost podman[270539]: 2025-10-14 09:51:59.551528106 +0000 UTC m=+0.092873578 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9) Oct 14 05:51:59 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:52:00 localhost podman[246584]: time="2025-10-14T09:52:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:52:00 localhost podman[246584]: @ - - [14/Oct/2025:09:52:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136325 "" "Go-http-client/1.1" Oct 14 05:52:00 localhost podman[246584]: @ - - [14/Oct/2025:09:52:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16865 "" "Go-http-client/1.1" Oct 14 05:52:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24004 DF PROTO=TCP SPT=56636 DPT=9102 SEQ=596042296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762339A90000000001030307) Oct 14 05:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:52:02 localhost podman[270560]: 2025-10-14 09:52:02.53408387 +0000 UTC m=+0.075406921 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 05:52:02 localhost podman[270560]: 2025-10-14 09:52:02.639299047 +0000 UTC m=+0.180622078 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 05:52:02 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:52:03 localhost openstack_network_exporter[248748]: ERROR 09:52:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:52:03 localhost openstack_network_exporter[248748]: ERROR 09:52:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:52:03 localhost openstack_network_exporter[248748]: ERROR 09:52:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:52:03 localhost openstack_network_exporter[248748]: ERROR 09:52:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:52:03 localhost openstack_network_exporter[248748]: Oct 14 05:52:03 localhost openstack_network_exporter[248748]: ERROR 09:52:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:52:03 localhost openstack_network_exporter[248748]: Oct 14 05:52:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:52:03 localhost podman[270585]: 2025-10-14 09:52:03.535982578 +0000 UTC m=+0.079439159 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:52:03 localhost podman[270585]: 2025-10-14 09:52:03.546109968 +0000 UTC m=+0.089566539 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:52:03 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:52:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:52:12 localhost systemd[1]: tmp-crun.nKxCPs.mount: Deactivated successfully. 
Oct 14 05:52:12 localhost podman[270607]: 2025-10-14 09:52:12.553927399 +0000 UTC m=+0.096371751 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:52:12 localhost podman[270607]: 2025-10-14 09:52:12.56405655 +0000 UTC m=+0.106500912 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:52:12 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:52:15 localhost nova_compute[236479]: 2025-10-14 09:52:15.220 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:52:17 localhost podman[270627]: 2025-10-14 09:52:17.542878906 +0000 UTC m=+0.085338745 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:52:17 localhost podman[270627]: 2025-10-14 09:52:17.551095908 +0000 UTC m=+0.093555697 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 05:52:17 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:52:17 localhost podman[270628]: 2025-10-14 09:52:17.598050767 +0000 UTC m=+0.137215377 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:52:17 localhost podman[270628]: 2025-10-14 09:52:17.634215733 +0000 UTC m=+0.173380373 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:52:17 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:52:21 localhost sshd[270668]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:52:23 localhost systemd[1]: tmp-crun.DE0T87.mount: Deactivated successfully. 
Oct 14 05:52:23 localhost podman[270671]: 2025-10-14 09:52:23.549330967 +0000 UTC m=+0.083444435 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 05:52:23 localhost podman[270671]: 2025-10-14 09:52:23.558919557 +0000 UTC m=+0.093033045 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 14 05:52:23 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:52:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31179 DF PROTO=TCP SPT=55054 DPT=9102 SEQ=3483739821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7623930D0000000001030307) Oct 14 05:52:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31180 DF PROTO=TCP SPT=55054 DPT=9102 SEQ=3483739821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762397290000000001030307) Oct 14 05:52:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:52:25 localhost podman[270690]: 2025-10-14 09:52:25.538134977 +0000 UTC m=+0.080416993 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 05:52:25 localhost podman[270690]: 2025-10-14 09:52:25.550515252 +0000 UTC m=+0.092797288 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd) Oct 14 05:52:25 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:52:25 localhost sshd[270711]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31181 DF PROTO=TCP SPT=55054 DPT=9102 SEQ=3483739821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76239F290000000001030307) Oct 14 05:52:29 localhost sshd[270714]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 05:52:30 localhost podman[270717]: 2025-10-14 09:52:30.550488651 +0000 UTC m=+0.085783697 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 05:52:30 localhost podman[270717]: 2025-10-14 09:52:30.563074041 +0000 UTC m=+0.098369067 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers) Oct 14 05:52:30 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:52:30 localhost podman[246584]: time="2025-10-14T09:52:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:52:30 localhost podman[246584]: @ - - [14/Oct/2025:09:52:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136325 "" "Go-http-client/1.1" Oct 14 05:52:30 localhost podman[246584]: @ - - [14/Oct/2025:09:52:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16867 "" "Go-http-client/1.1" Oct 14 05:52:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31182 DF PROTO=TCP SPT=55054 DPT=9102 SEQ=3483739821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7623AEE90000000001030307) Oct 14 05:52:32 localhost sshd[270737]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:52:33 localhost podman[270739]: 2025-10-14 09:52:33.03769194 +0000 UTC m=+0.079712693 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible) Oct 14 05:52:33 localhost podman[270739]: 2025-10-14 09:52:33.120594549 +0000 UTC m=+0.162615272 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 05:52:33 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:52:33 localhost openstack_network_exporter[248748]: ERROR 09:52:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:52:33 localhost openstack_network_exporter[248748]: ERROR 09:52:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:52:33 localhost openstack_network_exporter[248748]: ERROR 09:52:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:52:33 localhost openstack_network_exporter[248748]: ERROR 09:52:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:52:33 localhost openstack_network_exporter[248748]: Oct 14 05:52:33 localhost openstack_network_exporter[248748]: ERROR 09:52:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:52:33 localhost openstack_network_exporter[248748]: Oct 14 05:52:34 localhost sshd[270848]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:52:34 localhost systemd[1]: tmp-crun.dAjIQN.mount: Deactivated successfully. 
Oct 14 05:52:34 localhost podman[270852]: 2025-10-14 09:52:34.520020932 +0000 UTC m=+0.093803334 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:52:34 localhost podman[270852]: 2025-10-14 09:52:34.535263634 +0000 UTC m=+0.109046076 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:52:34 localhost systemd-logind[760]: New session 61 of user zuul. Oct 14 05:52:34 localhost systemd[1]: Started Session 61 of User zuul. Oct 14 05:52:34 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:52:35 localhost python3.9[271020]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:52:36 localhost sshd[271042]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:36 localhost python3.9[271136]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:52:37 localhost python3.9[271246]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:52:38 localhost python3.9[271374]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:52:38 localhost python3.9[271484]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 14 05:52:39 localhost python3.9[271594]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:52:39 localhost sshd[271650]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:40 localhost python3.9[271706]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:52:41 localhost python3.9[271818]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:52:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5668 writes, 25K keys, 5668 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5668 writes, 713 syncs, 7.95 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10 writes, 20 keys, 10 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s#012Interval WAL: 10 writes, 5 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:52:42 localhost python3.9[271930]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:52:42 localhost network[271947]: You 
are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 05:52:42 localhost network[271948]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:52:42 localhost network[271949]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:52:43 localhost podman[271956]: 2025-10-14 09:52:43.07579369 +0000 UTC m=+0.087431592 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 14 05:52:43 localhost podman[271956]: 2025-10-14 09:52:43.110118497 +0000 UTC m=+0.121756369 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 05:52:43 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 05:52:43 localhost sshd[271985]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 05:52:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 4849 writes, 21K keys, 4849 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4849 writes, 664 syncs, 7.30 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10 writes, 20 keys, 10 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s#012Interval WAL: 10 writes, 5 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 05:52:46 localhost sshd[272114]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.188 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" 
by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.189 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.189 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.190 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.190 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.644 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:52:47 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:52:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:52:47 localhost systemd[1]: tmp-crun.ZawNMb.mount: Deactivated successfully. Oct 14 05:52:47 localhost podman[272140]: 2025-10-14 09:52:47.778861731 +0000 UTC m=+0.103613289 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:52:47 localhost podman[272140]: 2025-10-14 09:52:47.790112995 +0000 UTC m=+0.114864533 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 
'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:52:47 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:52:47 localhost podman[272139]: 2025-10-14 09:52:47.868007259 +0000 UTC m=+0.191923804 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.875 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.877 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12777MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": 
"type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.877 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:52:47 localhost nova_compute[236479]: 2025-10-14 09:52:47.878 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:52:47 localhost podman[272139]: 2025-10-14 09:52:47.902350486 +0000 UTC m=+0.226267031 
container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 05:52:47 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:52:48 localhost nova_compute[236479]: 2025-10-14 09:52:48.004 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:52:48 localhost nova_compute[236479]: 2025-10-14 09:52:48.005 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:52:48 localhost nova_compute[236479]: 2025-10-14 09:52:48.035 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:52:48 localhost nova_compute[236479]: 2025-10-14 09:52:48.485 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:52:48 localhost nova_compute[236479]: 2025-10-14 09:52:48.492 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:52:48 localhost nova_compute[236479]: 2025-10-14 09:52:48.507 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for 
provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:52:48 localhost nova_compute[236479]: 2025-10-14 09:52:48.510 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:52:48 localhost nova_compute[236479]: 2025-10-14 09:52:48.510 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:52:49 localhost python3.9[272293]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:52:49 localhost nova_compute[236479]: 2025-10-14 09:52:49.511 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:49 localhost nova_compute[236479]: 2025-10-14 09:52:49.512 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task 
ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 
05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:52:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:52:50 localhost nova_compute[236479]: 2025-10-14 09:52:50.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:50 localhost python3.9[272403]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:52:50 localhost sshd[272423]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:51 localhost python3.9[272517]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False 
backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:52:51 localhost python3.9[272627]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:52:52 localhost nova_compute[236479]: 2025-10-14 09:52:52.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:52 localhost nova_compute[236479]: 2025-10-14 09:52:52.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:52 localhost python3.9[272737]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:52:53 localhost python3.9[272794]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:52:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10608 DF PROTO=TCP SPT=40122 DPT=9102 SEQ=736156493 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7624083E0000000001030307) Oct 14 05:52:53 localhost podman[272905]: 2025-10-14 09:52:53.844527742 +0000 UTC m=+0.085036499 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:52:53 localhost sshd[272919]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:53 localhost podman[272905]: 2025-10-14 09:52:53.85890102 +0000 UTC m=+0.099409787 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251009) Oct 14 05:52:53 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:52:53 localhost python3.9[272904]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:52:54 localhost nova_compute[236479]: 2025-10-14 09:52:54.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:54 localhost nova_compute[236479]: 2025-10-14 09:52:54.166 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:52:54 localhost nova_compute[236479]: 2025-10-14 09:52:54.166 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:52:54 localhost nova_compute[236479]: 2025-10-14 09:52:54.182 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:52:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10609 DF PROTO=TCP SPT=40122 DPT=9102 SEQ=736156493 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76240C290000000001030307) Oct 14 05:52:55 localhost nova_compute[236479]: 2025-10-14 09:52:55.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:55 localhost nova_compute[236479]: 2025-10-14 09:52:55.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:52:55 localhost nova_compute[236479]: 2025-10-14 09:52:55.165 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:52:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:52:55 localhost systemd[1]: tmp-crun.6YRfrf.mount: Deactivated successfully. 
Oct 14 05:52:55 localhost podman[272983]: 2025-10-14 09:52:55.77626833 +0000 UTC m=+0.094792661 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible) Oct 14 05:52:55 localhost podman[272983]: 2025-10-14 09:52:55.786774254 +0000 UTC m=+0.105298575 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS) Oct 14 05:52:55 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:52:55 localhost python3.9[272982]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:52:56 localhost python3.9[273111]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:52:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10610 DF PROTO=TCP SPT=40122 DPT=9102 SEQ=736156493 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762414290000000001030307) Oct 14 05:52:57 localhost sshd[273183]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:52:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:52:57.616 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:52:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:52:57.616 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:52:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:52:57.617 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:52:58 localhost python3.9[273223]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:52:58 localhost python3.9[273280]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:52:59 localhost python3.9[273390]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:52:59 localhost python3.9[273447]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None setype=None attributes=None Oct 14 05:53:00 localhost podman[246584]: time="2025-10-14T09:53:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:53:00 localhost podman[246584]: @ - - [14/Oct/2025:09:53:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136325 "" "Go-http-client/1.1" Oct 14 05:53:00 localhost podman[246584]: @ - - [14/Oct/2025:09:53:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16869 "" "Go-http-client/1.1" Oct 14 05:53:00 localhost python3.9[273557]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:53:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:53:00 localhost systemd[1]: Reloading. 
Oct 14 05:53:00 localhost podman[273559]: 2025-10-14 09:53:00.781837381 +0000 UTC m=+0.084254946 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 05:53:00 localhost systemd-sysv-generator[273603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:53:00 localhost systemd-rc-local-generator[273600]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 05:53:00 localhost podman[273559]: 2025-10-14 09:53:00.820140916 +0000 UTC m=+0.122558541 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 05:53:00 localhost sshd[273612]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:53:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10611 DF PROTO=TCP SPT=40122 DPT=9102 SEQ=736156493 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762423E90000000001030307) Oct 14 05:53:01 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:53:01 localhost python3.9[273725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:53:02 localhost python3.9[273782]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:03 localhost python3.9[273892]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:53:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:53:03 localhost openstack_network_exporter[248748]: ERROR 09:53:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:53:03 localhost openstack_network_exporter[248748]: ERROR 09:53:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:53:03 localhost openstack_network_exporter[248748]: ERROR 09:53:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:53:03 localhost openstack_network_exporter[248748]: ERROR 09:53:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:53:03 localhost openstack_network_exporter[248748]: Oct 14 05:53:03 localhost openstack_network_exporter[248748]: ERROR 09:53:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:53:03 localhost openstack_network_exporter[248748]: Oct 14 05:53:03 localhost podman[273895]: 2025-10-14 09:53:03.36196581 +0000 UTC m=+0.117366541 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:53:03 localhost podman[273895]: 2025-10-14 09:53:03.430208913 +0000 UTC m=+0.185609694 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:53:03 localhost systemd[1]: 
328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:53:03 localhost python3.9[273972]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:04 localhost sshd[274083]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:04 localhost python3.9[274082]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:53:04 localhost systemd[1]: Reloading. Oct 14 05:53:04 localhost systemd-sysv-generator[274111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:53:04 localhost systemd-rc-local-generator[274106]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:53:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:53:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:53:04 localhost systemd[1]: Starting Create netns directory... Oct 14 05:53:04 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Oct 14 05:53:04 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 14 05:53:04 localhost systemd[1]: Finished Create netns directory. Oct 14 05:53:04 localhost podman[274122]: 2025-10-14 09:53:04.919467942 +0000 UTC m=+0.085153500 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:53:04 localhost podman[274122]: 2025-10-14 09:53:04.956188844 +0000 UTC m=+0.121874382 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, 
name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:53:04 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:53:06 localhost python3.9[274257]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:53:07 localhost sshd[274291]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:08 localhost python3.9[274369]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:53:08 localhost python3.9[274426]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/iscsid/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/iscsid/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:53:10 localhost python3.9[274536]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:53:11 localhost python3.9[274646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True Oct 14 05:53:11 localhost sshd[274703]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:11 localhost python3.9[274705]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/iscsid.json _original_basename=.8o8cy6em recurse=False state=file path=/var/lib/kolla/config_files/iscsid.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:12 localhost python3.9[274815]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:53:13 localhost systemd[1]: tmp-crun.pSYO22.mount: Deactivated successfully. 
Oct 14 05:53:13 localhost podman[274983]: 2025-10-14 09:53:13.452459236 +0000 UTC m=+0.103392504 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 05:53:13 localhost podman[274983]: 2025-10-14 09:53:13.491003557 +0000 UTC m=+0.141936855 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:53:13 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:53:14 localhost sshd[275112]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:14 localhost python3.9[275111]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False Oct 14 05:53:15 localhost python3.9[275223]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:53:16 localhost python3.9[275333]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 14 05:53:18 localhost sshd[275378]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:53:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:53:18 localhost systemd[1]: tmp-crun.D8EymQ.mount: Deactivated successfully. 
Oct 14 05:53:18 localhost podman[275380]: 2025-10-14 09:53:18.374990924 +0000 UTC m=+0.088445719 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Oct 14 05:53:18 localhost podman[275380]: 2025-10-14 09:53:18.410240156 +0000 UTC 
m=+0.123694941 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 05:53:18 localhost systemd[1]: tmp-crun.V5a2Fa.mount: Deactivated successfully. 
Oct 14 05:53:18 localhost podman[275381]: 2025-10-14 09:53:18.421825329 +0000 UTC m=+0.132762417 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:53:18 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:53:18 localhost podman[275381]: 2025-10-14 09:53:18.457273197 +0000 UTC m=+0.168210275 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:53:18 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 05:53:21 localhost sshd[275513]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:21 localhost python3[275512]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:53:21 localhost python3[275512]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "4f44a4f5e0315c0d3dbd533e21d0927bf0518cf452942382901ff1ff9d621cbd",#012 "Digest": "sha256:2975c6e807fa09f0e2062da08d3a0bb209ca055d73011ebb91164def554f60aa",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid@sha256:2975c6e807fa09f0e2062da08d3a0bb209ca055d73011ebb91164def554f60aa"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-14T06:14:08.154480843Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "0468cb21803d466b2abfe00835cf1d2d",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 403858061,#012 "VirtualSize": 403858061,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/1b94024f0eaacdff3ae200e2172324d7aec107282443f6fc22fe2f0287bc90ec/diff:/var/lib/containers/storage/overlay/0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/9c7bc0417a3c6c9361659b5f2f41d814b152f8a47a3821564971debd2b788997/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",#012 "sha256:2896905ce9321c1f2feb1f3ada413e86eda3444455358ab965478a041351b392",#012 "sha256:f640179b0564dc7abbe22bd39fc8810d5bbb8e54094fe7ebc5b3c45b658c4983",#012 "sha256:f004953af60f7a99c360488169b0781a154164be09dce508bd68d57932c60f8f"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "0468cb21803d466b2abfe00835cf1d2d",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-09T00:18:03.867908726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:03.868015697Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" 
org.label-schema.build-date=\"20251009\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:07.890794359Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-14T06:08:54.969219151Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969253522Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969285133Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969308103Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969342284Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969363945Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:55.340499198Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:09:32.389605838Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main 
keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:09:35.587912811Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which Oct 14 05:53:22 localhost python3.9[275685]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:53:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56057 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3075371713 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76247D6E0000000001030307) Oct 14 05:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:53:24 localhost podman[275798]: 2025-10-14 09:53:24.113071907 +0000 UTC m=+0.090490374 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 05:53:24 localhost podman[275798]: 2025-10-14 09:53:24.149107361 +0000 UTC m=+0.126525788 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:53:24 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:53:24 localhost python3.9[275797]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:24 localhost python3.9[275871]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:53:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56058 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3075371713 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762481690000000001030307) Oct 14 05:53:25 localhost sshd[275926]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:25 localhost python3.9[275982]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760435604.7984807-986-137612135230199/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:53:26 localhost podman[276038]: 2025-10-14 09:53:26.159636878 +0000 UTC m=+0.082809838 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 05:53:26 localhost podman[276038]: 2025-10-14 09:53:26.174136649 +0000 UTC m=+0.097309619 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 05:53:26 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:53:26 localhost python3.9[276037]: ansible-systemd Invoked with state=started name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:53:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56059 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3075371713 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762489690000000001030307) Oct 14 05:53:28 localhost python3.9[276168]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:53:28 localhost sshd[276226]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:29 localhost python3.9[276282]: ansible-ansible.builtin.systemd Invoked with name=edpm_iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:53:30 localhost systemd[1]: Stopping iscsid container... Oct 14 05:53:30 localhost iscsid[215814]: iscsid shutting down. Oct 14 05:53:30 localhost systemd[1]: libpod-46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.scope: Deactivated successfully. 
Oct 14 05:53:30 localhost podman[276286]: 2025-10-14 09:53:30.168080031 +0000 UTC m=+0.072040527 container died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_id=iscsid) Oct 14 05:53:30 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.timer: Deactivated successfully. Oct 14 05:53:30 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:53:30 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Failed to open /run/systemd/transient/46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: No such file or directory Oct 14 05:53:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be-userdata-shm.mount: Deactivated successfully. Oct 14 05:53:30 localhost systemd[1]: var-lib-containers-storage-overlay-db1b7c89f975e4a66a34c85e8759daed9307412fcb76862c9bc8708564b81e4b-merged.mount: Deactivated successfully. Oct 14 05:53:30 localhost podman[276286]: 2025-10-14 09:53:30.270146877 +0000 UTC m=+0.174107323 container cleanup 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 05:53:30 localhost podman[276286]: iscsid Oct 14 05:53:30 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.timer: Failed to open /run/systemd/transient/46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.timer: No such file or directory Oct 14 05:53:30 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Failed to open /run/systemd/transient/46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: No such file or directory Oct 14 05:53:30 localhost podman[276315]: 2025-10-14 09:53:30.365389739 +0000 UTC m=+0.064863042 container cleanup 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 05:53:30 localhost podman[276315]: iscsid Oct 14 05:53:30 localhost systemd[1]: edpm_iscsid.service: Deactivated successfully. Oct 14 05:53:30 localhost systemd[1]: Stopped iscsid container. Oct 14 05:53:30 localhost systemd[1]: Starting iscsid container... Oct 14 05:53:30 localhost systemd[1]: Started libcrun container. 
Oct 14 05:53:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1b7c89f975e4a66a34c85e8759daed9307412fcb76862c9bc8708564b81e4b/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 05:53:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1b7c89f975e4a66a34c85e8759daed9307412fcb76862c9bc8708564b81e4b/merged/etc/target supports timestamps until 2038 (0x7fffffff) Oct 14 05:53:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1b7c89f975e4a66a34c85e8759daed9307412fcb76862c9bc8708564b81e4b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 05:53:30 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.timer: Failed to open /run/systemd/transient/46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.timer: No such file or directory Oct 14 05:53:30 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Failed to open /run/systemd/transient/46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: No such file or directory Oct 14 05:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:53:30 localhost podman[276328]: 2025-10-14 09:53:30.537977861 +0000 UTC m=+0.141183894 container init 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 05:53:30 localhost iscsid[276340]: + sudo -E kolla_set_configs Oct 14 05:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:53:30 localhost podman[276328]: 2025-10-14 09:53:30.577000634 +0000 UTC m=+0.180206627 container start 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 05:53:30 localhost podman[276328]: iscsid Oct 14 05:53:30 localhost systemd[1]: Started iscsid container. Oct 14 05:53:30 localhost systemd[1]: Created slice User Slice of UID 0. 
Oct 14 05:53:30 localhost podman[246584]: time="2025-10-14T09:53:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:53:30 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 14 05:53:30 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 14 05:53:30 localhost podman[246584]: @ - - [14/Oct/2025:09:53:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136324 "" "Go-http-client/1.1" Oct 14 05:53:30 localhost systemd[1]: Starting User Manager for UID 0... Oct 14 05:53:30 localhost podman[246584]: @ - - [14/Oct/2025:09:53:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16859 "" "Go-http-client/1.1" Oct 14 05:53:30 localhost podman[276348]: 2025-10-14 09:53:30.72126491 +0000 UTC m=+0.139241631 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3) Oct 14 05:53:30 localhost podman[276348]: 2025-10-14 09:53:30.75496546 +0000 UTC m=+0.172942171 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 05:53:30 localhost podman[276348]: unhealthy Oct 14 05:53:30 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Main process exited, code=exited, status=1/FAILURE Oct 14 05:53:30 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Failed with result 'exit-code'. Oct 14 05:53:30 localhost systemd[276355]: Queued start job for default target Main User Target. Oct 14 05:53:30 localhost systemd[276355]: Created slice User Application Slice. Oct 14 05:53:30 localhost systemd[276355]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 14 05:53:30 localhost systemd[276355]: Started Daily Cleanup of User's Temporary Directories. Oct 14 05:53:30 localhost systemd[276355]: Reached target Paths. Oct 14 05:53:30 localhost systemd[276355]: Reached target Timers. Oct 14 05:53:30 localhost systemd[276355]: Starting D-Bus User Message Bus Socket... Oct 14 05:53:30 localhost systemd[276355]: Starting Create User's Volatile Files and Directories... Oct 14 05:53:30 localhost systemd[276355]: Listening on D-Bus User Message Bus Socket. Oct 14 05:53:30 localhost systemd[276355]: Reached target Sockets. Oct 14 05:53:30 localhost systemd[276355]: Finished Create User's Volatile Files and Directories. Oct 14 05:53:30 localhost systemd[276355]: Reached target Basic System. Oct 14 05:53:30 localhost systemd[276355]: Reached target Main User Target. Oct 14 05:53:30 localhost systemd[276355]: Startup finished in 121ms. 
Oct 14 05:53:30 localhost systemd[1]: Started User Manager for UID 0. Oct 14 05:53:30 localhost systemd[1]: Started Session c15 of User root. Oct 14 05:53:30 localhost iscsid[276340]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:53:30 localhost iscsid[276340]: INFO:__main__:Validating config file Oct 14 05:53:30 localhost iscsid[276340]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:53:30 localhost iscsid[276340]: INFO:__main__:Writing out command to execute Oct 14 05:53:30 localhost systemd[1]: session-c15.scope: Deactivated successfully. Oct 14 05:53:30 localhost iscsid[276340]: ++ cat /run_command Oct 14 05:53:30 localhost iscsid[276340]: + CMD='/usr/sbin/iscsid -f' Oct 14 05:53:30 localhost iscsid[276340]: + ARGS= Oct 14 05:53:30 localhost iscsid[276340]: + sudo kolla_copy_cacerts Oct 14 05:53:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56060 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3075371713 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762499290000000001030307) Oct 14 05:53:30 localhost systemd[1]: Started Session c16 of User root. Oct 14 05:53:30 localhost iscsid[276340]: + [[ ! -n '' ]] Oct 14 05:53:30 localhost systemd[1]: session-c16.scope: Deactivated successfully. Oct 14 05:53:30 localhost iscsid[276340]: + . kolla_extend_start Oct 14 05:53:30 localhost iscsid[276340]: ++ [[ ! 
-f /etc/iscsi/initiatorname.iscsi ]] Oct 14 05:53:30 localhost iscsid[276340]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\''' Oct 14 05:53:30 localhost iscsid[276340]: Running command: '/usr/sbin/iscsid -f' Oct 14 05:53:30 localhost iscsid[276340]: + umask 0022 Oct 14 05:53:30 localhost iscsid[276340]: + exec /usr/sbin/iscsid -f Oct 14 05:53:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:53:31 localhost systemd[1]: tmp-crun.6KlG5Y.mount: Deactivated successfully. Oct 14 05:53:31 localhost podman[276498]: 2025-10-14 09:53:31.231776628 +0000 UTC m=+0.089160710 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Oct 14 05:53:31 localhost podman[276498]: 2025-10-14 09:53:31.244757277 +0000 UTC m=+0.102141369 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, release=1755695350, config_id=edpm, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible) Oct 14 05:53:31 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:53:31 localhost python3.9[276499]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:31 localhost sshd[276537]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:32 localhost python3.9[276631]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:53:32 localhost network[276648]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 14 05:53:32 localhost network[276649]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:53:32 localhost network[276650]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:53:33 localhost openstack_network_exporter[248748]: ERROR 09:53:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:53:33 localhost openstack_network_exporter[248748]: ERROR 09:53:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:53:33 localhost openstack_network_exporter[248748]: ERROR 09:53:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:53:33 localhost openstack_network_exporter[248748]: ERROR 09:53:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:53:33 localhost openstack_network_exporter[248748]: Oct 14 05:53:33 localhost openstack_network_exporter[248748]: ERROR 09:53:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:53:33 localhost openstack_network_exporter[248748]: Oct 14 05:53:33 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:53:33 localhost podman[276669]: 2025-10-14 09:53:33.550922668 +0000 UTC m=+0.074565154 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 14 05:53:33 localhost podman[276669]: 2025-10-14 09:53:33.595253315 +0000 UTC m=+0.118895831 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, 
org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0) Oct 14 05:53:33 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:53:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:53:35 localhost podman[276768]: 2025-10-14 09:53:35.088123442 +0000 UTC m=+0.083907527 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:53:35 localhost podman[276768]: 2025-10-14 09:53:35.10211035 +0000 UTC m=+0.097894405 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:53:35 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:53:35 localhost sshd[276805]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:38 localhost sshd[277006]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:38 localhost python3.9[277005]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 14 05:53:39 localhost podman[277195]: Oct 14 05:53:39 localhost podman[277195]: 2025-10-14 09:53:39.692708355 +0000 UTC m=+0.076125217 container create 059dc4979cf5d07022f1be4306ad874c62c136b01758cc9f2b775df50805d37e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_albattani, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, ceph=True, GIT_BRANCH=main, release=553, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph) Oct 14 05:53:39 localhost systemd[1]: Started 
libpod-conmon-059dc4979cf5d07022f1be4306ad874c62c136b01758cc9f2b775df50805d37e.scope. Oct 14 05:53:39 localhost systemd[1]: Started libcrun container. Oct 14 05:53:39 localhost podman[277195]: 2025-10-14 09:53:39.749434726 +0000 UTC m=+0.132851568 container init 059dc4979cf5d07022f1be4306ad874c62c136b01758cc9f2b775df50805d37e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_albattani, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55) Oct 14 05:53:39 localhost podman[277195]: 2025-10-14 09:53:39.760967148 +0000 UTC m=+0.144384020 container start 059dc4979cf5d07022f1be4306ad874c62c136b01758cc9f2b775df50805d37e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_albattani, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 05:53:39 localhost podman[277195]: 2025-10-14 09:53:39.761276196 +0000 UTC m=+0.144693038 container attach 059dc4979cf5d07022f1be4306ad874c62c136b01758cc9f2b775df50805d37e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_albattani, version=7, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=) Oct 14 
05:53:39 localhost podman[277195]: 2025-10-14 09:53:39.662821087 +0000 UTC m=+0.046237969 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 05:53:39 localhost loving_albattani[277211]: 167 167 Oct 14 05:53:39 localhost systemd[1]: libpod-059dc4979cf5d07022f1be4306ad874c62c136b01758cc9f2b775df50805d37e.scope: Deactivated successfully. Oct 14 05:53:39 localhost podman[277195]: 2025-10-14 09:53:39.764700009 +0000 UTC m=+0.148116931 container died 059dc4979cf5d07022f1be4306ad874c62c136b01758cc9f2b775df50805d37e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_albattani, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, release=553, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 05:53:39 localhost python3.9[277202]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Oct 14 05:53:39 localhost systemd-journald[47332]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. 
Oct 14 05:53:39 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 14 05:53:39 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:53:39 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:53:39 localhost podman[277216]: 2025-10-14 09:53:39.896478228 +0000 UTC m=+0.120952758 container remove 059dc4979cf5d07022f1be4306ad874c62c136b01758cc9f2b775df50805d37e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_albattani, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux ) Oct 14 05:53:39 localhost systemd[1]: libpod-conmon-059dc4979cf5d07022f1be4306ad874c62c136b01758cc9f2b775df50805d37e.scope: Deactivated successfully. 
Oct 14 05:53:40 localhost podman[277255]: Oct 14 05:53:40 localhost podman[277255]: 2025-10-14 09:53:40.082923823 +0000 UTC m=+0.075903071 container create d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldwasser, GIT_BRANCH=main, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_CLEAN=True) Oct 14 05:53:40 localhost systemd[1]: Started libpod-conmon-d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc.scope. Oct 14 05:53:40 localhost systemd[1]: Started libcrun container. 
Oct 14 05:53:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66aeb798736d4be04a09f5631cdfdef8d7fc061e7fdfd9a36659eadff2f86ec3/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 14 05:53:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66aeb798736d4be04a09f5631cdfdef8d7fc061e7fdfd9a36659eadff2f86ec3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 14 05:53:40 localhost podman[277255]: 2025-10-14 09:53:40.052323166 +0000 UTC m=+0.045302454 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 05:53:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66aeb798736d4be04a09f5631cdfdef8d7fc061e7fdfd9a36659eadff2f86ec3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 05:53:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66aeb798736d4be04a09f5631cdfdef8d7fc061e7fdfd9a36659eadff2f86ec3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 05:53:40 localhost podman[277255]: 2025-10-14 09:53:40.155379229 +0000 UTC m=+0.148358457 container init d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldwasser, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, name=rhceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.33.12, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64) Oct 14 05:53:40 localhost podman[277255]: 2025-10-14 09:53:40.162905103 +0000 UTC m=+0.155884361 container start d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldwasser, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, version=7) Oct 14 05:53:40 localhost podman[277255]: 2025-10-14 09:53:40.163132869 +0000 UTC m=+0.156112117 container attach d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldwasser, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
io.openshift.expose-services=, version=7, architecture=x86_64, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 05:53:40 localhost python3.9[277369]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:53:40 localhost systemd[1]: var-lib-containers-storage-overlay-016952d1831c3145479ad3f25e7a20330683f65add5dabc4275e36409b7228ba-merged.mount: Deactivated successfully. Oct 14 05:53:41 localhost python3.9[278064]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:41 localhost systemd[1]: Stopping User Manager for UID 0... 
Oct 14 05:53:41 localhost systemd[276355]: Activating special unit Exit the Session... Oct 14 05:53:41 localhost systemd[276355]: Stopped target Main User Target. Oct 14 05:53:41 localhost systemd[276355]: Stopped target Basic System. Oct 14 05:53:41 localhost systemd[276355]: Stopped target Paths. Oct 14 05:53:41 localhost systemd[276355]: Stopped target Sockets. Oct 14 05:53:41 localhost systemd[276355]: Stopped target Timers. Oct 14 05:53:41 localhost systemd[276355]: Stopped Daily Cleanup of User's Temporary Directories. Oct 14 05:53:41 localhost systemd[276355]: Closed D-Bus User Message Bus Socket. Oct 14 05:53:41 localhost systemd[276355]: Stopped Create User's Volatile Files and Directories. Oct 14 05:53:41 localhost systemd[276355]: Removed slice User Application Slice. Oct 14 05:53:41 localhost systemd[276355]: Reached target Shutdown. Oct 14 05:53:41 localhost systemd[276355]: Finished Exit the Session. Oct 14 05:53:41 localhost systemd[276355]: Reached target Exit the Session. Oct 14 05:53:41 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 14 05:53:41 localhost systemd[1]: Stopped User Manager for UID 0. Oct 14 05:53:41 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 14 05:53:41 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 14 05:53:41 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 14 05:53:41 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 14 05:53:41 localhost systemd[1]: Removed slice User Slice of UID 0. 
Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: [ Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: { Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "available": false, Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "ceph_device": false, Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "lsm_data": {}, Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "lvs": [], Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "path": "/dev/sr0", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "rejected_reasons": [ Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "Has a FileSystem", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "Insufficient space (<5GB)" Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: ], Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "sys_api": { Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "actuators": null, Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "device_nodes": "sr0", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "human_readable_size": "482.00 KB", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "id_bus": "ata", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "model": "QEMU DVD-ROM", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "nr_requests": "2", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "partitions": {}, Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "path": "/dev/sr0", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "removable": "1", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "rev": "2.5+", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "ro": "0", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "rotational": "1", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "sas_address": "", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "sas_device_handle": "", Oct 14 05:53:41 localhost 
vibrant_goldwasser[277286]: "scheduler_mode": "mq-deadline", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "sectors": 0, Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "sectorsize": "2048", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "size": 493568.0, Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "support_discard": "0", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "type": "disk", Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: "vendor": "QEMU" Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: } Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: } Oct 14 05:53:41 localhost vibrant_goldwasser[277286]: ] Oct 14 05:53:41 localhost systemd[1]: libpod-d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc.scope: Deactivated successfully. Oct 14 05:53:41 localhost systemd[1]: libpod-d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc.scope: Consumed 1.082s CPU time. Oct 14 05:53:41 localhost podman[277255]: 2025-10-14 09:53:41.25550918 +0000 UTC m=+1.248488468 container died d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldwasser, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, ceph=True, vendor=Red Hat, Inc., RELEASE=main, 
com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 05:53:41 localhost systemd[1]: tmp-crun.WMUxpM.mount: Deactivated successfully. Oct 14 05:53:41 localhost podman[279413]: 2025-10-14 09:53:41.360550537 +0000 UTC m=+0.093885957 container remove d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_goldwasser, RELEASE=main, vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main) Oct 14 05:53:41 localhost systemd[1]: libpod-conmon-d3a583110750c6b74db35fd0c755c1df1ab657ca0dbf87172c573356c039b1bc.scope: Deactivated successfully. Oct 14 05:53:41 localhost systemd[1]: var-lib-containers-storage-overlay-66aeb798736d4be04a09f5631cdfdef8d7fc061e7fdfd9a36659eadff2f86ec3-merged.mount: Deactivated successfully. 
Oct 14 05:53:41 localhost python3.9[279519]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:42 localhost sshd[279555]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:43 localhost python3.9[279649]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:53:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:53:44 localhost systemd[1]: tmp-crun.lL8dBl.mount: Deactivated successfully. 
Oct 14 05:53:44 localhost podman[279760]: 2025-10-14 09:53:44.062865497 +0000 UTC m=+0.085630894 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:53:44 localhost podman[279760]: 2025-10-14 09:53:44.073103604 +0000 UTC m=+0.095869001 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 05:53:44 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:53:44 localhost python3.9[279759]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:53:45 localhost sshd[279892]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:45 localhost python3.9[279891]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:53:46 localhost python3.9[280005]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:53:47 localhost python3.9[280116]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:53:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 05:53:48 localhost podman[280188]: 2025-10-14 09:53:48.559034251 +0000 UTC m=+0.098058079 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 05:53:48 localhost podman[280188]: 2025-10-14 09:53:48.595109765 +0000 UTC 
m=+0.134133593 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 05:53:48 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:53:48 localhost podman[280241]: 2025-10-14 09:53:48.658743673 +0000 UTC m=+0.089714464 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:53:48 localhost podman[280241]: 2025-10-14 09:53:48.670540262 +0000 UTC m=+0.101511133 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:53:48 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:53:48 localhost python3.9[280255]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:49 localhost sshd[280337]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.248 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.249 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" 
acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.250 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.250 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.250 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:53:49 localhost python3.9[280377]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.732 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.937 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.939 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12753MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.939 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:53:49 localhost nova_compute[236479]: 2025-10-14 09:53:49.939 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:53:50 localhost nova_compute[236479]: 2025-10-14 09:53:50.093 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:53:50 localhost nova_compute[236479]: 2025-10-14 09:53:50.093 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB 
phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:53:50 localhost nova_compute[236479]: 2025-10-14 09:53:50.122 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:53:50 localhost python3.9[280510]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:50 localhost nova_compute[236479]: 2025-10-14 09:53:50.594 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:53:50 localhost nova_compute[236479]: 2025-10-14 09:53:50.601 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:53:50 localhost nova_compute[236479]: 2025-10-14 09:53:50.630 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 
'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:53:50 localhost nova_compute[236479]: 2025-10-14 09:53:50.632 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:53:50 localhost nova_compute[236479]: 2025-10-14 09:53:50.633 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:53:50 localhost python3.9[280642]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:51 localhost python3.9[280752]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:53:51 localhost nova_compute[236479]: 2025-10-14 09:53:51.629 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:51 localhost nova_compute[236479]: 2025-10-14 09:53:51.630 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:52 localhost sshd[280865]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:52 localhost python3.9[280864]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:53:53 localhost python3.9[280976]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:53:53 localhost python3.9[281033]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:53:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17960 DF PROTO=TCP SPT=50746 DPT=9102 SEQ=1384873323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 
OPT (020405500402080A7624F29E0000000001030307) Oct 14 05:53:54 localhost nova_compute[236479]: 2025-10-14 09:53:54.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:54 localhost nova_compute[236479]: 2025-10-14 09:53:54.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:54 localhost python3.9[281143]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:53:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17961 DF PROTO=TCP SPT=50746 DPT=9102 SEQ=1384873323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7624F6AA0000000001030307) Oct 14 05:53:55 localhost python3.9[281200]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:53:56 localhost sshd[281311]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:56 localhost python3.9[281310]: ansible-ansible.builtin.file Invoked with mode=420 
path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:56 localhost nova_compute[236479]: 2025-10-14 09:53:56.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:56 localhost nova_compute[236479]: 2025-10-14 09:53:56.165 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:53:56 localhost nova_compute[236479]: 2025-10-14 09:53:56.165 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:53:56 localhost nova_compute[236479]: 2025-10-14 09:53:56.184 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:53:56 localhost podman[281330]: 2025-10-14 09:53:56.321876855 +0000 UTC m=+0.088880882 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 05:53:56 localhost podman[281330]: 2025-10-14 09:53:56.331537975 +0000 UTC m=+0.098541932 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:53:56 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:53:56 localhost python3.9[281441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:53:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17962 DF PROTO=TCP SPT=50746 DPT=9102 SEQ=1384873323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7624FEA90000000001030307) Oct 14 05:53:57 localhost nova_compute[236479]: 2025-10-14 09:53:57.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:57 localhost nova_compute[236479]: 2025-10-14 09:53:57.188 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:57 localhost nova_compute[236479]: 2025-10-14 09:53:57.188 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:53:57 localhost nova_compute[236479]: 2025-10-14 09:53:57.189 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:53:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:53:57.617 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:53:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:53:57.618 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:53:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:53:57.618 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:53:57 localhost python3.9[281498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:58 localhost python3.9[281608]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:53:59 localhost python3.9[281665]: ansible-ansible.legacy.file Invoked with 
group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:53:59 localhost sshd[281737]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:53:59 localhost python3.9[281777]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:53:59 localhost systemd[1]: Reloading. Oct 14 05:54:00 localhost systemd-rc-local-generator[281799]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:54:00 localhost systemd-sysv-generator[281802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:54:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:54:00 localhost podman[246584]: time="2025-10-14T09:54:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:54:00 localhost podman[246584]: @ - - [14/Oct/2025:09:54:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136324 "" "Go-http-client/1.1" Oct 14 05:54:00 localhost podman[246584]: @ - - [14/Oct/2025:09:54:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16858 "" "Go-http-client/1.1" Oct 14 05:54:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17963 DF PROTO=TCP SPT=50746 DPT=9102 SEQ=1384873323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76250E690000000001030307) Oct 14 05:54:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:54:01 localhost systemd[1]: tmp-crun.NN6nxo.mount: Deactivated successfully. 
Oct 14 05:54:01 localhost podman[281926]: 2025-10-14 09:54:01.097015172 +0000 UTC m=+0.091740708 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 05:54:01 localhost podman[281926]: 2025-10-14 09:54:01.135291276 +0000 UTC m=+0.130016762 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 05:54:01 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:54:01 localhost python3.9[281925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:54:01 localhost podman[282000]: 2025-10-14 09:54:01.550203731 +0000 UTC m=+0.087924165 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Oct 14 05:54:01 localhost podman[282000]: 2025-10-14 09:54:01.567234061 +0000 UTC m=+0.104954535 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-type=git, release=1755695350) Oct 14 05:54:01 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:54:01 localhost python3.9[282006]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:02 localhost python3.9[282130]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:54:02 localhost python3.9[282187]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:02 localhost sshd[282188]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:03 localhost openstack_network_exporter[248748]: ERROR 09:54:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:54:03 localhost openstack_network_exporter[248748]: ERROR 09:54:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:54:03 localhost openstack_network_exporter[248748]: ERROR 09:54:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:54:03 localhost openstack_network_exporter[248748]: ERROR 09:54:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:54:03 localhost openstack_network_exporter[248748]: Oct 14 05:54:03 localhost openstack_network_exporter[248748]: ERROR 09:54:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:54:03 localhost openstack_network_exporter[248748]: Oct 14 05:54:03 localhost python3.9[282299]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:54:03 localhost systemd[1]: Reloading. 
Oct 14 05:54:03 localhost podman[282301]: 2025-10-14 09:54:03.874744228 +0000 UTC m=+0.086670672 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 05:54:03 localhost systemd-sysv-generator[282344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:54:03 localhost systemd-rc-local-generator[282340]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 05:54:03 localhost podman[282301]: 2025-10-14 09:54:03.945154669 +0000 UTC m=+0.157081143 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251009) Oct 14 05:54:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:54:04 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:54:04 localhost systemd[1]: Starting Create netns directory... Oct 14 05:54:04 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Oct 14 05:54:04 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 14 05:54:04 localhost systemd[1]: Finished Create netns directory. Oct 14 05:54:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:54:05 localhost systemd[1]: tmp-crun.NqD0MF.mount: Deactivated successfully. Oct 14 05:54:05 localhost python3.9[282476]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:54:05 localhost podman[282477]: 2025-10-14 09:54:05.25413505 +0000 UTC m=+0.090742281 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:54:05 localhost podman[282477]: 2025-10-14 09:54:05.26819595 +0000 UTC m=+0.104803221 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:54:05 localhost systemd[1]: 
c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:54:06 localhost python3.9[282609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:54:06 localhost sshd[282628]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:07 localhost python3.9[282668]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:54:08 localhost python3.9[282778]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:54:09 localhost sshd[282889]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:10 localhost python3.9[282888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:54:10 localhost python3.9[282947]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=._r8gckrs recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:11 localhost python3.9[283057]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:13 localhost sshd[283242]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:13 localhost python3.9[283336]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Oct 14 05:54:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:54:14 localhost podman[283408]: 2025-10-14 09:54:14.55791163 +0000 UTC m=+0.093642471 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true) Oct 14 05:54:14 localhost podman[283408]: 2025-10-14 09:54:14.572034081 +0000 UTC m=+0.107764992 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0) Oct 14 05:54:14 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:54:14 localhost python3.9[283465]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:54:15 localhost python3.9[283575]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 14 05:54:16 localhost sshd[283620]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:54:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:54:19 localhost podman[283623]: 2025-10-14 09:54:19.544239113 +0000 UTC m=+0.079700894 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:54:19 localhost podman[283623]: 2025-10-14 09:54:19.55933324 +0000 UTC m=+0.094795051 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:54:19 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 05:54:19 localhost podman[283622]: 2025-10-14 09:54:19.523142463 +0000 UTC m=+0.065008767 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent) Oct 14 05:54:19 localhost podman[283622]: 2025-10-14 09:54:19.602119955 +0000 UTC 
m=+0.143986239 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 05:54:19 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:54:20 localhost sshd[283757]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:20 localhost python3[283756]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:54:20 localhost python3[283756]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "0cc989a5ef996507b0f9d8ef7fc230c93fad4ad33debd19bbe24250b85566285",#012 "Digest": "sha256:7b5e7d0bff1c705215946e167be50eac031a93886d33e2e88e389776e8e13e70",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:7b5e7d0bff1c705215946e167be50eac031a93886d33e2e88e389776e8e13e70"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-14T06:10:30.956277521Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "0468cb21803d466b2abfe00835cf1d2d",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 249351661,#012 "VirtualSize": 249351661,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/0b52816892c0967aea6a33893e73899adbf76e3ca055f6670535905d8ddf2b2c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/b229675e52e0150c8f53be2f60bdcd02e09cc9ac91e9d7513ccf836c4fc95815/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/b229675e52e0150c8f53be2f60bdcd02e09cc9ac91e9d7513ccf836c4fc95815/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",#012 "sha256:2896905ce9321c1f2feb1f3ada413e86eda3444455358ab965478a041351b392",#012 "sha256:3be5c7cbc12431945afa672da84f6330a9da4cc765276b49a4ad90cf80ae26d7"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "0468cb21803d466b2abfe00835cf1d2d",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-09T00:18:03.867908726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:03.868015697Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251009\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:07.890794359Z",#012 "created_by": "/bin/sh -c #(nop) CMD 
[\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-14T06:08:54.969219151Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969253522Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969285133Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969308103Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969342284Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:54.969363945Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:08:55.340499198Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:09:32.389605838Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main 
skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:09:35.587912811Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-14T06:09:35.976619634Z",#012 Oct 14 05:54:22 localhost python3.9[283930]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:54:23 localhost python3.9[284042]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:23 localhost sshd[284098]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:23 localhost python3.9[284097]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:54:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6707 DF PROTO=TCP SPT=59632 DPT=9102 SEQ=2879856997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762567CE0000000001030307) Oct 14 05:54:24 localhost python3.9[284209]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760435663.8965127-2189-174892529554494/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True 
remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6708 DF PROTO=TCP SPT=59632 DPT=9102 SEQ=2879856997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76256BE90000000001030307) Oct 14 05:54:25 localhost python3.9[284264]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:25 localhost python3.9[284374]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:54:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:54:26 localhost podman[284463]: 2025-10-14 09:54:26.546882361 +0000 UTC m=+0.082450015 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 05:54:26 localhost podman[284463]: 2025-10-14 09:54:26.591107658 +0000 UTC m=+0.126675302 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 05:54:26 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:54:26 localhost python3.9[284498]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6709 DF PROTO=TCP SPT=59632 DPT=9102 SEQ=2879856997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762573E90000000001030307) Oct 14 05:54:26 localhost sshd[284519]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:27 localhost python3.9[284613]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 14 05:54:28 localhost python3.9[284723]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Oct 14 05:54:29 localhost python3.9[284833]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:54:30 localhost python3.9[284890]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:30 localhost sshd[284924]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:30 localhost podman[246584]: time="2025-10-14T09:54:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:54:30 localhost podman[246584]: @ - - [14/Oct/2025:09:54:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:54:30 localhost podman[246584]: @ - - [14/Oct/2025:09:54:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16862 "" "Go-http-client/1.1" Oct 14 05:54:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6710 DF PROTO=TCP SPT=59632 DPT=9102 SEQ=2879856997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762583A90000000001030307) Oct 14 05:54:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:54:31 localhost systemd[1]: tmp-crun.GXBcoJ.mount: Deactivated successfully. 
Oct 14 05:54:31 localhost podman[285002]: 2025-10-14 09:54:31.43844771 +0000 UTC m=+0.091107375 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2) Oct 14 05:54:31 localhost podman[285002]: 2025-10-14 09:54:31.476168974 +0000 UTC m=+0.128828679 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 05:54:31 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:54:31 localhost python3.9[285003]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:54:32 localhost podman[285130]: 2025-10-14 09:54:32.383360597 +0000 UTC m=+0.084444789 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 05:54:32 localhost podman[285130]: 2025-10-14 09:54:32.399099116 +0000 UTC m=+0.100183308 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.) Oct 14 05:54:32 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:54:32 localhost python3.9[285129]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 14 05:54:33 localhost openstack_network_exporter[248748]: ERROR 09:54:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:54:33 localhost openstack_network_exporter[248748]: ERROR 09:54:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:54:33 localhost openstack_network_exporter[248748]: ERROR 09:54:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:54:33 localhost openstack_network_exporter[248748]: ERROR 09:54:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:54:33 localhost openstack_network_exporter[248748]: Oct 14 05:54:33 localhost openstack_network_exporter[248748]: ERROR 09:54:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:54:33 localhost openstack_network_exporter[248748]: Oct 14 05:54:33 localhost sshd[285213]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:34 localhost python3.9[285212]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ 
install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 14 05:54:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:54:34 localhost podman[285217]: 2025-10-14 09:54:34.535576234 +0000 UTC m=+0.081137870 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 05:54:34 localhost podman[285217]: 2025-10-14 09:54:34.612181783 +0000 UTC 
m=+0.157743419 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 05:54:34 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:54:35 localhost podman[285241]: 2025-10-14 09:54:35.541359701 +0000 UTC m=+0.084677024 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:54:35 localhost podman[285241]: 2025-10-14 09:54:35.552151328 +0000 UTC m=+0.095468641 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:54:35 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:54:37 localhost sshd[285281]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:38 localhost python3.9[285373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 14 05:54:39 localhost python3.9[285487]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:40 localhost sshd[285598]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:40 localhost python3.9[285597]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:54:40 localhost systemd[1]: Reloading. Oct 14 05:54:41 localhost systemd-sysv-generator[285626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:54:41 localhost systemd-rc-local-generator[285622]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:54:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:54:41 localhost python3.9[285744]: ansible-ansible.builtin.service_facts Invoked Oct 14 05:54:41 localhost network[285761]: You are using 'network' service provided by 'network-scripts', which are now deprecated. 
Oct 14 05:54:41 localhost network[285762]: 'network-scripts' will be removed from distribution in near future. Oct 14 05:54:41 localhost network[285763]: It is advised to switch to 'NetworkManager' instead for network management. Oct 14 05:54:44 localhost sshd[285878]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 05:54:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:54:44 localhost systemd[1]: tmp-crun.AH643I.mount: Deactivated successfully. Oct 14 05:54:44 localhost podman[285911]: 2025-10-14 09:54:44.718517863 +0000 UTC m=+0.095290807 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 05:54:44 localhost podman[285911]: 2025-10-14 09:54:44.755265941 +0000 UTC m=+0.132038845 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 05:54:44 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 05:54:47 localhost sshd[286065]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:48 localhost python3.9[286105]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:48 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c9:f0:cc MACPROTO=0800 SRC=162.142.125.211 DST=38.102.83.104 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=28886 DF PROTO=TCP SPT=43958 DPT=19885 SEQ=3085244460 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A0B3D3846000000000103030A) Oct 14 05:54:48 localhost python3.9[286216]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:54:49 localhost 
nova_compute[236479]: 2025-10-14 09:54:49.185 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.185 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.186 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.186 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.186 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:54:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.668 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:54:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:54:49 localhost podman[286348]: 2025-10-14 09:54:49.673874199 +0000 UTC m=+0.062882505 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:54:49 localhost podman[286348]: 2025-10-14 09:54:49.679007216 +0000 UTC m=+0.068015552 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:54:49 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:54:49 localhost systemd[1]: tmp-crun.IgLbV8.mount: Deactivated successfully. Oct 14 05:54:49 localhost python3.9[286347]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:49 localhost podman[286365]: 2025-10-14 09:54:49.745900905 +0000 UTC m=+0.066907771 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 05:54:49 localhost podman[286365]: 2025-10-14 09:54:49.757278029 +0000 UTC m=+0.078284915 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:54:49 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.854 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.857 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12697MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.857 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.857 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:54:49 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c9:f0:cc MACPROTO=0800 SRC=162.142.125.211 DST=38.102.83.104 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=50857 DF PROTO=TCP SPT=43964 DPT=19885 SEQ=2319107409 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A0B3D3C64000000000103030A) Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.924 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.924 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: 
name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:54:49 localhost nova_compute[236479]: 2025-10-14 09:54:49.954 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.970 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:54:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:54:50 localhost nova_compute[236479]: 2025-10-14 09:54:50.430 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:54:50 localhost nova_compute[236479]: 2025-10-14 09:54:50.436 2 DEBUG nova.compute.provider_tree [None 
req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:54:50 localhost nova_compute[236479]: 2025-10-14 09:54:50.462 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:54:50 localhost nova_compute[236479]: 2025-10-14 09:54:50.465 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:54:50 localhost nova_compute[236479]: 2025-10-14 09:54:50.465 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:54:50 localhost python3.9[286519]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:50 localhost kernel: DROPPING: IN=eth0 OUT= 
MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c9:f0:cc MACPROTO=0800 SRC=162.142.125.211 DST=38.102.83.104 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=50858 DF PROTO=TCP SPT=43964 DPT=19885 SEQ=2319107409 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A0B3D4068000000000103030A) Oct 14 05:54:50 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c9:f0:cc MACPROTO=0800 SRC=162.142.125.211 DST=38.102.83.104 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=22919 DF PROTO=TCP SPT=43974 DPT=19885 SEQ=932856903 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A0B3D4079000000000103030A) Oct 14 05:54:50 localhost sshd[286615]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:51 localhost python3.9[286634]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:51 localhost nova_compute[236479]: 2025-10-14 09:54:51.467 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:54:51 localhost nova_compute[236479]: 2025-10-14 09:54:51.467 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:54:51 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c9:f0:cc MACPROTO=0800 SRC=162.142.125.211 DST=38.102.83.104 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=22920 DF PROTO=TCP SPT=43974 DPT=19885 SEQ=932856903 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A0B3D4469000000000103030A) Oct 14 05:54:52 localhost python3.9[286745]: 
ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:52 localhost nova_compute[236479]: 2025-10-14 09:54:52.160 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:54:52 localhost python3.9[286856]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15361 DF PROTO=TCP SPT=36390 DPT=9102 SEQ=3021607037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7625DCFE0000000001030307) Oct 14 05:54:54 localhost nova_compute[236479]: 2025-10-14 09:54:54.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:54:54 localhost sshd[286966]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:54:54 localhost python3.9[286969]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:54:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15362 DF PROTO=TCP SPT=36390 
DPT=9102 SEQ=3021607037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7625E0EA0000000001030307) Oct 14 05:54:55 localhost nova_compute[236479]: 2025-10-14 09:54:55.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:54:56 localhost nova_compute[236479]: 2025-10-14 09:54:56.165 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:54:56 localhost nova_compute[236479]: 2025-10-14 09:54:56.165 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:54:56 localhost nova_compute[236479]: 2025-10-14 09:54:56.166 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:54:56 localhost nova_compute[236479]: 2025-10-14 09:54:56.185 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:54:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15363 DF PROTO=TCP SPT=36390 DPT=9102 SEQ=3021607037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7625E8E90000000001030307) Oct 14 05:54:56 localhost podman[286988]: 2025-10-14 09:54:56.902530744 +0000 UTC m=+0.085305011 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=multipathd) Oct 14 05:54:56 localhost podman[286988]: 2025-10-14 09:54:56.918192271 +0000 UTC m=+0.100966528 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes 
Operator team) Oct 14 05:54:56 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:54:57 localhost nova_compute[236479]: 2025-10-14 09:54:57.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:54:57 localhost python3.9[287098]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:54:57.618 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:54:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:54:57.619 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:54:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:54:57.619 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:54:57 localhost sshd[287140]: main: 
sshd: ssh-rsa algorithm is disabled Oct 14 05:54:58 localhost python3.9[287210]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:58 localhost python3.9[287320]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:54:59 localhost nova_compute[236479]: 2025-10-14 09:54:59.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:54:59 localhost nova_compute[236479]: 2025-10-14 09:54:59.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:54:59 localhost python3.9[287430]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:00 localhost python3.9[287540]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:00 localhost podman[246584]: time="2025-10-14T09:55:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:55:00 localhost podman[246584]: @ - - [14/Oct/2025:09:55:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:55:00 localhost podman[246584]: @ - - [14/Oct/2025:09:55:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16866 "" "Go-http-client/1.1" Oct 14 05:55:00 localhost python3.9[287650]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15364 DF PROTO=TCP SPT=36390 DPT=9102 SEQ=3021607037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7625F8A90000000001030307) Oct 14 05:55:01 localhost sshd[287760]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:01 localhost python3.9[287761]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:55:01 localhost systemd[1]: tmp-crun.5PCf6C.mount: Deactivated successfully. 
Oct 14 05:55:01 localhost podman[287873]: 2025-10-14 09:55:01.841431783 +0000 UTC m=+0.096960111 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, config_id=iscsid, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:55:01 localhost podman[287873]: 2025-10-14 09:55:01.857105811 +0000 UTC m=+0.112634129 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:55:01 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:55:01 localhost python3.9[287872]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:55:02 localhost systemd[1]: tmp-crun.bQkDEY.mount: Deactivated successfully. Oct 14 05:55:02 localhost podman[287999]: 2025-10-14 09:55:02.553640657 +0000 UTC m=+0.091035373 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64) Oct 14 05:55:02 localhost podman[287999]: 2025-10-14 09:55:02.570186897 +0000 UTC m=+0.107581623 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 14 05:55:02 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:55:02 localhost python3.9[288005]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:03 localhost openstack_network_exporter[248748]: ERROR 09:55:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:55:03 localhost openstack_network_exporter[248748]: ERROR 09:55:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:55:03 localhost openstack_network_exporter[248748]: ERROR 09:55:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:55:03 localhost openstack_network_exporter[248748]: ERROR 09:55:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:55:03 localhost openstack_network_exporter[248748]: Oct 14 05:55:03 localhost openstack_network_exporter[248748]: ERROR 09:55:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:55:03 localhost openstack_network_exporter[248748]: Oct 14 05:55:03 localhost python3.9[288130]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:03 localhost python3.9[288240]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:04 localhost python3.9[288350]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:04 localhost sshd[288364]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:55:04 localhost systemd[1]: tmp-crun.0bbZox.mount: Deactivated successfully. 
Oct 14 05:55:04 localhost podman[288370]: 2025-10-14 09:55:04.769316123 +0000 UTC m=+0.080444531 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 05:55:04 localhost podman[288370]: 2025-10-14 09:55:04.836199753 +0000 UTC m=+0.147328181 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 05:55:04 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:55:05 localhost python3.9[288485]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:55:05 localhost podman[288595]: 2025-10-14 09:55:05.727984167 +0000 UTC m=+0.075223603 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:55:05 localhost podman[288595]: 2025-10-14 09:55:05.764117497 +0000 UTC m=+0.111356973 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:55:05 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:55:05 localhost python3.9[288596]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:07 localhost python3.9[288727]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:07 localhost python3.9[288837]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:08 localhost sshd[288893]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:09 localhost python3.9[288949]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:55:10 localhost 
python3.9[289059]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 14 05:55:11 localhost python3.9[289169]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 14 05:55:11 localhost systemd[1]: Reloading. Oct 14 05:55:11 localhost systemd-sysv-generator[289200]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 05:55:11 localhost systemd-rc-local-generator[289196]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 05:55:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 05:55:11 localhost sshd[289206]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:12 localhost python3.9[289318]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:55:13 localhost python3.9[289429]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:55:14 localhost python3.9[289540]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:55:14 localhost sshd[289597]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:55:15 localhost systemd[1]: tmp-crun.R69AtU.mount: Deactivated successfully. 
Oct 14 05:55:15 localhost podman[289651]: 2025-10-14 09:55:15.157977867 +0000 UTC m=+0.101039549 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:55:15 localhost podman[289651]: 2025-10-14 09:55:15.193303697 +0000 UTC m=+0.136365359 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Oct 14 05:55:15 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:55:15 localhost python3.9[289664]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:55:16 localhost python3.9[289783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:55:17 localhost python3.9[289894]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:55:18 localhost python3.9[290005]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:55:18 localhost sshd[290025]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:18 localhost python3.9[290118]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:55:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:55:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:55:20 localhost podman[290137]: 2025-10-14 09:55:20.556253902 +0000 UTC m=+0.086029150 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 05:55:20 localhost podman[290137]: 2025-10-14 09:55:20.567147533 +0000 UTC m=+0.096922821 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 05:55:20 
localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:55:20 localhost podman[290138]: 2025-10-14 09:55:20.65459843 +0000 UTC m=+0.181441120 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:55:20 localhost podman[290138]: 2025-10-14 09:55:20.693206907 +0000 UTC m=+0.220049607 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:55:20 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:55:21 localhost python3.9[290271]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:21 localhost sshd[290382]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:21 localhost python3.9[290381]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:23 localhost python3.9[290493]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:23 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59976 DF PROTO=TCP SPT=48174 DPT=9102 SEQ=3742846111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7626522D0000000001030307) Oct 14 05:55:23 localhost python3.9[290603]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:24 localhost python3.9[290713]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59977 DF PROTO=TCP SPT=48174 DPT=9102 SEQ=3742846111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762656290000000001030307) Oct 14 05:55:25 localhost sshd[290824]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:25 localhost python3.9[290823]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:25 localhost python3.9[290936]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:26 localhost python3.9[291046]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59978 DF PROTO=TCP SPT=48174 DPT=9102 SEQ=3742846111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76265E2A0000000001030307) Oct 14 05:55:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:55:27 localhost podman[291157]: 2025-10-14 09:55:27.078428787 +0000 UTC m=+0.081025528 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 05:55:27 localhost podman[291157]: 2025-10-14 09:55:27.090681552 +0000 UTC m=+0.093278263 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd) Oct 14 05:55:27 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:55:27 localhost python3.9[291156]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:27 localhost python3.9[291287]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:28 localhost python3.9[291397]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:28 localhost sshd[291402]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:29 localhost python3.9[291509]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:30 localhost podman[246584]: time="2025-10-14T09:55:30Z" level=info msg="List containers: received `last` 
parameter - overwriting `limit`" Oct 14 05:55:30 localhost podman[246584]: @ - - [14/Oct/2025:09:55:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:55:30 localhost podman[246584]: @ - - [14/Oct/2025:09:55:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16860 "" "Go-http-client/1.1" Oct 14 05:55:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59979 DF PROTO=TCP SPT=48174 DPT=9102 SEQ=3742846111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76266DE90000000001030307) Oct 14 05:55:31 localhost sshd[291527]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:55:32 localhost podman[291529]: 2025-10-14 09:55:32.293777913 +0000 UTC m=+0.070107827 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3) Oct 14 05:55:32 localhost podman[291529]: 2025-10-14 09:55:32.332152383 +0000 UTC m=+0.108482287 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid) Oct 14 05:55:32 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:55:33 localhost openstack_network_exporter[248748]: ERROR 09:55:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:55:33 localhost openstack_network_exporter[248748]: ERROR 09:55:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:55:33 localhost openstack_network_exporter[248748]: ERROR 09:55:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:55:33 localhost openstack_network_exporter[248748]: ERROR 09:55:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:55:33 localhost openstack_network_exporter[248748]: Oct 14 05:55:33 localhost openstack_network_exporter[248748]: ERROR 09:55:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:55:33 localhost openstack_network_exporter[248748]: Oct 14 05:55:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 05:55:33 localhost podman[291548]: 2025-10-14 09:55:33.52992566 +0000 UTC m=+0.075736447 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git) Oct 14 05:55:33 localhost podman[291548]: 2025-10-14 09:55:33.542265078 +0000 UTC m=+0.088075874 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc.) Oct 14 05:55:33 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:55:35 localhost podman[291660]: 2025-10-14 09:55:35.046506742 +0000 UTC m=+0.079262550 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS) Oct 14 05:55:35 localhost podman[291660]: 2025-10-14 09:55:35.145182888 +0000 UTC m=+0.177938716 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, 
container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 05:55:35 localhost python3.9[291659]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Oct 14 05:55:35 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:55:35 localhost sshd[291704]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:36 localhost sshd[291707]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:55:36 localhost systemd-logind[760]: New session 63 of user zuul. Oct 14 05:55:36 localhost systemd[1]: Started Session 63 of User zuul. 
Oct 14 05:55:36 localhost podman[291709]: 2025-10-14 09:55:36.380254206 +0000 UTC m=+0.094227929 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:55:36 localhost podman[291709]: 2025-10-14 09:55:36.395214845 +0000 UTC m=+0.109188607 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:55:36 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:55:36 localhost systemd[1]: session-63.scope: Deactivated successfully. Oct 14 05:55:36 localhost systemd-logind[760]: Session 63 logged out. Waiting for processes to exit. Oct 14 05:55:36 localhost systemd-logind[760]: Removed session 63. 
Oct 14 05:55:37 localhost python3.9[291843]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:55:37 localhost python3.9[291929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435736.7103312-3916-94016099602119/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:38 localhost python3.9[292037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:55:38 localhost python3.9[292092]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:38 localhost sshd[292110]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:39 localhost python3.9[292203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:55:39 localhost python3.9[292289]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435738.894724-3916-214942297112278/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:41 localhost python3.9[292397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:55:42 localhost python3.9[292483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435740.1048167-3916-271172764085686/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=80098a213e897ecefc50c1420f932ebe70b1fea3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:42 localhost sshd[292485]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:42 localhost python3.9[292593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:55:44 localhost python3.9[292715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760435742.330982-3916-175315776635115/.source.py follow=False 
_original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:55:45 localhost systemd[1]: tmp-crun.Pr41XW.mount: Deactivated successfully. Oct 14 05:55:45 localhost podman[292840]: 2025-10-14 09:55:45.385848234 +0000 UTC m=+0.095073441 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:55:45 localhost podman[292840]: 2025-10-14 09:55:45.42027736 +0000 UTC m=+0.129502577 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 05:55:45 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 05:55:45 localhost python3.9[292868]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:45 localhost sshd[292892]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:46 localhost python3.9[292986]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:47 localhost python3.9[293096]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:55:47 localhost python3.9[293226]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:55:48 localhost python3.9[293334]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:55:49 localhost sshd[293412]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.184 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.185 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.185 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.185 2 DEBUG 
nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.186 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:55:49 localhost python3.9[293446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.662 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.822 2 WARNING nova.virt.libvirt.driver [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.825 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12759MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.825 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.826 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:55:49 localhost python3.9[293522]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.918 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.919 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:55:49 localhost nova_compute[236479]: 2025-10-14 09:55:49.936 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:55:50 localhost nova_compute[236479]: 2025-10-14 09:55:50.400 2 DEBUG oslo_concurrency.processutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:55:50 localhost nova_compute[236479]: 2025-10-14 09:55:50.407 2 DEBUG nova.compute.provider_tree [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:55:50 localhost nova_compute[236479]: 2025-10-14 09:55:50.425 2 DEBUG nova.scheduler.client.report [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 
'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:55:50 localhost nova_compute[236479]: 2025-10-14 09:55:50.428 2 DEBUG nova.compute.resource_tracker [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:55:50 localhost nova_compute[236479]: 2025-10-14 09:55:50.428 2 DEBUG oslo_concurrency.lockutils [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:55:50 localhost python3.9[293652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 14 05:55:51 localhost python3.9[293709]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 14 05:55:51 localhost nova_compute[236479]: 2025-10-14 09:55:51.429 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e 
- - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:55:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:55:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:55:51 localhost systemd[1]: tmp-crun.XwLFu4.mount: Deactivated successfully. Oct 14 05:55:51 localhost podman[293727]: 2025-10-14 09:55:51.553252838 +0000 UTC m=+0.094107056 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 05:55:51 localhost podman[293727]: 2025-10-14 09:55:51.593325764 +0000 UTC m=+0.134180032 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:55:51 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:55:51 localhost podman[293728]: 2025-10-14 09:55:51.597865875 +0000 UTC m=+0.138174309 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:55:51 localhost podman[293728]: 2025-10-14 09:55:51.677558616 +0000 UTC m=+0.217867090 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, 
config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:55:51 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:55:52 localhost python3.9[293860]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Oct 14 05:55:52 localhost nova_compute[236479]: 2025-10-14 09:55:52.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:55:52 localhost sshd[293932]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:52 localhost python3.9[293972]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:55:53 localhost nova_compute[236479]: 2025-10-14 09:55:53.159 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:55:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b 
MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27787 DF PROTO=TCP SPT=35124 DPT=9102 SEQ=1583199681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7626C75E0000000001030307) Oct 14 05:55:53 localhost python3[294083]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:55:54 localhost python3[294083]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "b5b57d3572ac74b7c41332c066527d5039dbd47e134e43d7cb5d76b7732d99f5",#012 "Digest": "sha256:6cdce1b6b9f1175545fa217f885c1a3360bebe7d9975584481a6ff221f3ad48f",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:6cdce1b6b9f1175545fa217f885c1a3360bebe7d9975584481a6ff221f3ad48f"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-13T12:50:19.385564198Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": 
"linux",#012 "Size": 1207014273,#012 "VirtualSize": 1207014273,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36/diff:/var/lib/containers/storage/overlay/0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861/diff:/var/lib/containers/storage/overlay/ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",#012 "sha256:2c35d1af0a6e73cbcf6c04a576d2e6a150aeaa6ae9408c81b2003edd71d6ae59",#012 "sha256:3ad61591f8d467f7db4e096e1991f274fe1d4f8ad685b553dacb57c5e894eab0",#012 "sha256:e0ba9b00dd1340fa4eba9e9cd5f316c11381d47a31460e5b834a6ca56f60033f",#012 "sha256:731e9354c974a424a2f6724faa85f84baef270eb006be0de18bbdc87ff420f97"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-10-09T00:18:03.867908726Z",#012 "created_by": "/bin/sh -c #(nop) ADD 
file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:03.868015697Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251009\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:07.890794359Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-13T12:28:42.843286399Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843354051Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843394192Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843417133Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843442193Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843461914Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:43.236856724Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:29:17.539596691Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main 
clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Oct 14 05:55:54 localhost nova_compute[236479]: 2025-10-14 09:55:54.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:55:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27788 DF PROTO=TCP SPT=35124 DPT=9102 SEQ=1583199681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7626CB690000000001030307) Oct 14 05:55:54 localhost python3.9[294256]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:55:55 localhost nova_compute[236479]: 2025-10-14 09:55:55.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:55:55 localhost sshd[294351]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:55:56 localhost python3.9[294370]: 
ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Oct 14 05:55:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27789 DF PROTO=TCP SPT=35124 DPT=9102 SEQ=1583199681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7626D3690000000001030307) Oct 14 05:55:56 localhost python3.9[294480]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 14 05:55:57 localhost nova_compute[236479]: 2025-10-14 09:55:57.164 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:55:57 localhost nova_compute[236479]: 2025-10-14 09:55:57.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:55:57 localhost nova_compute[236479]: 2025-10-14 09:55:57.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:55:57 localhost nova_compute[236479]: 2025-10-14 09:55:57.185 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:55:57 localhost nova_compute[236479]: 2025-10-14 09:55:57.185 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:55:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:55:57 localhost podman[294514]: 2025-10-14 09:55:57.555841814 +0000 UTC m=+0.094149406 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 05:55:57 localhost podman[294514]: 2025-10-14 09:55:57.565197793 +0000 UTC m=+0.103505435 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 05:55:57 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:55:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:55:57.620 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:55:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:55:57.620 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:55:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:55:57.620 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:55:58 localhost python3[294609]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Oct 14 05:55:58 localhost python3[294609]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "b5b57d3572ac74b7c41332c066527d5039dbd47e134e43d7cb5d76b7732d99f5",#012 "Digest": "sha256:6cdce1b6b9f1175545fa217f885c1a3360bebe7d9975584481a6ff221f3ad48f",#012 "RepoTags": [#012 
"quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:6cdce1b6b9f1175545fa217f885c1a3360bebe7d9975584481a6ff221f3ad48f"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-13T12:50:19.385564198Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1207014273,#012 "VirtualSize": 1207014273,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/512b226761ef17c0044cb14b83718aa6f9984afb51b1aeb63112d22d2fdccb36/diff:/var/lib/containers/storage/overlay/0accaf46e2ca98f20a95b21cea4fb623de0e5378cb14b163bca0a8771d84c861/diff:/var/lib/containers/storage/overlay/ab64777085904da680013c790c3f2c65f0b954578737ec4d7fa836f56655c34a/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/5ce6c5d0cc60f856680938093014249abcf9a107a94355720d953b1d1e7f1bfe/work"#012 }#012 
},#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",#012 "sha256:2c35d1af0a6e73cbcf6c04a576d2e6a150aeaa6ae9408c81b2003edd71d6ae59",#012 "sha256:3ad61591f8d467f7db4e096e1991f274fe1d4f8ad685b553dacb57c5e894eab0",#012 "sha256:e0ba9b00dd1340fa4eba9e9cd5f316c11381d47a31460e5b834a6ca56f60033f",#012 "sha256:731e9354c974a424a2f6724faa85f84baef270eb006be0de18bbdc87ff420f97"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251009",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1e4eeec18f8da2b364b39b7a7358aef5",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-10-09T00:18:03.867908726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:03.868015697Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251009\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-09T00:18:07.890794359Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-13T12:28:42.843286399Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843354051Z",#012 
"created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843394192Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843417133Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843442193Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:42.843461914Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:28:43.236856724Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-13T12:29:17.539596691Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Oct 14 05:55:59 localhost python3.9[294780]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True 
checksum_algorithm=sha1 Oct 14 05:55:59 localhost nova_compute[236479]: 2025-10-14 09:55:59.163 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:55:59 localhost nova_compute[236479]: 2025-10-14 09:55:59.164 2 DEBUG nova.compute.manager [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:55:59 localhost sshd[294800]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:00 localhost python3.9[294894]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:56:00 localhost podman[246584]: time="2025-10-14T09:56:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:56:00 localhost podman[246584]: @ - - [14/Oct/2025:09:56:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:56:00 localhost podman[246584]: @ - - [14/Oct/2025:09:56:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16865 "" "Go-http-client/1.1" Oct 14 05:56:00 localhost python3.9[295003]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760435760.151537-4552-122287142998554/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root 
backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:56:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27790 DF PROTO=TCP SPT=35124 DPT=9102 SEQ=1583199681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7626E3290000000001030307) Oct 14 05:56:01 localhost nova_compute[236479]: 2025-10-14 09:56:01.159 2 DEBUG oslo_service.periodic_task [None req-415d75c9-b2be-4d73-9f42-57b28b4f254e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:56:01 localhost python3.9[295058]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 14 05:56:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:56:02 localhost podman[295060]: 2025-10-14 09:56:02.547508668 +0000 UTC m=+0.087047148 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:56:02 localhost podman[295060]: 2025-10-14 09:56:02.579265483 +0000 UTC m=+0.118803923 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:56:02 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:56:02 localhost sshd[295096]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:03 localhost openstack_network_exporter[248748]: ERROR 09:56:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:56:03 localhost openstack_network_exporter[248748]: ERROR 09:56:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:56:03 localhost openstack_network_exporter[248748]: ERROR 09:56:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:56:03 localhost openstack_network_exporter[248748]: ERROR 09:56:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:56:03 localhost openstack_network_exporter[248748]: Oct 14 05:56:03 localhost openstack_network_exporter[248748]: ERROR 09:56:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:56:03 localhost openstack_network_exporter[248748]: Oct 14 05:56:03 localhost python3.9[295188]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:56:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:56:04 localhost podman[295297]: 2025-10-14 09:56:04.54289448 +0000 UTC m=+0.084101170 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Oct 14 05:56:04 localhost podman[295297]: 2025-10-14 09:56:04.555402334 +0000 UTC m=+0.096609033 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, architecture=x86_64) Oct 14 05:56:04 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:56:04 localhost python3.9[295296]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:56:05 localhost python3.9[295424]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 14 05:56:05 localhost podman[295425]: 2025-10-14 09:56:05.539151734 +0000 UTC m=+0.079763613 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 14 05:56:05 localhost podman[295425]: 2025-10-14 09:56:05.607122233 +0000 UTC m=+0.147734102 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller) Oct 14 05:56:05 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:56:06 localhost sshd[295505]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:56:06 localhost podman[295507]: 2025-10-14 09:56:06.537950596 +0000 UTC m=+0.079924959 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:56:06 localhost podman[295507]: 2025-10-14 09:56:06.549151634 +0000 UTC m=+0.091126067 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:56:06 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:56:07 localhost python3.9[295586]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Oct 14 05:56:07 localhost systemd-journald[47332]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 105.1 (350 of 333 items), suggesting rotation. Oct 14 05:56:07 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 14 05:56:07 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:56:07 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:56:08 localhost python3.9[295720]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:56:08 localhost systemd[1]: Stopping nova_compute container... Oct 14 05:56:08 localhost systemd[1]: tmp-crun.Md44eR.mount: Deactivated successfully. 
Oct 14 05:56:09 localhost sshd[295738]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:10 localhost nova_compute[236479]: 2025-10-14 09:56:10.221 2 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Oct 14 05:56:10 localhost nova_compute[236479]: 2025-10-14 09:56:10.224 2 DEBUG oslo_concurrency.lockutils [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 05:56:10 localhost nova_compute[236479]: 2025-10-14 09:56:10.224 2 DEBUG oslo_concurrency.lockutils [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 05:56:10 localhost nova_compute[236479]: 2025-10-14 09:56:10.225 2 DEBUG oslo_concurrency.lockutils [None req-f1474482-75b6-4e7d-b2a0-01b77e35c867 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 05:56:10 localhost journal[235816]: End of file while reading data: Input/output error Oct 14 05:56:10 localhost systemd[1]: libpod-1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89.scope: Deactivated successfully. Oct 14 05:56:10 localhost systemd[1]: libpod-1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89.scope: Consumed 19.930s CPU time. 
Oct 14 05:56:10 localhost podman[295724]: 2025-10-14 09:56:10.581365961 +0000 UTC m=+1.927743452 container died 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251009) Oct 14 05:56:10 localhost systemd[1]: tmp-crun.Og0ARj.mount: Deactivated successfully. Oct 14 05:56:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89-userdata-shm.mount: Deactivated successfully. 
Oct 14 05:56:10 localhost systemd[1]: var-lib-containers-storage-overlay-0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866-merged.mount: Deactivated successfully. Oct 14 05:56:10 localhost podman[295724]: 2025-10-14 09:56:10.784891308 +0000 UTC m=+2.131268739 container cleanup 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 05:56:10 localhost podman[295724]: nova_compute Oct 14 05:56:10 localhost podman[295752]: 2025-10-14 09:56:10.88304585 +0000 UTC m=+0.063531242 container cleanup 
1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:56:10 localhost podman[295752]: nova_compute Oct 14 05:56:10 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Oct 14 05:56:10 localhost systemd[1]: Stopped nova_compute container. Oct 14 05:56:10 localhost systemd[1]: Starting nova_compute container... Oct 14 05:56:11 localhost systemd[1]: Started libcrun container. 
Oct 14 05:56:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 14 05:56:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Oct 14 05:56:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 14 05:56:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 14 05:56:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4e9fed705bcabcf81a6a1cc24eb30c25f469f16a1594c8a0a8fd51317d9866/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 05:56:11 localhost podman[295763]: 2025-10-14 09:56:11.046471879 +0000 UTC m=+0.132029355 container init 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:56:11 localhost podman[295763]: 2025-10-14 09:56:11.056829045 +0000 UTC m=+0.142386551 container start 1febac3e936ee8473c924a1d3acad0f60c59b043468a025d97d7b016ab638e89 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', 
'/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 05:56:11 localhost podman[295763]: nova_compute Oct 14 05:56:11 localhost nova_compute[295778]: + sudo -E kolla_set_configs Oct 14 05:56:11 localhost systemd[1]: Started nova_compute container. Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Validating config file Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying service configuration files Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Deleting /etc/nova/nova.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/nova/nova.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 14 
05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Deleting /etc/ceph Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Creating directory /etc/ceph Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/ceph Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying 
/var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Writing out command to execute Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:56:11 localhost nova_compute[295778]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 14 05:56:11 localhost nova_compute[295778]: ++ cat /run_command Oct 14 05:56:11 localhost nova_compute[295778]: + CMD=nova-compute Oct 14 05:56:11 localhost nova_compute[295778]: + ARGS= Oct 14 05:56:11 localhost nova_compute[295778]: + sudo kolla_copy_cacerts Oct 14 05:56:11 localhost nova_compute[295778]: + [[ ! -n '' ]] Oct 14 05:56:11 localhost nova_compute[295778]: + . 
kolla_extend_start Oct 14 05:56:11 localhost nova_compute[295778]: Running command: 'nova-compute' Oct 14 05:56:11 localhost nova_compute[295778]: + echo 'Running command: '\''nova-compute'\''' Oct 14 05:56:11 localhost nova_compute[295778]: + umask 0022 Oct 14 05:56:11 localhost nova_compute[295778]: + exec nova-compute Oct 14 05:56:12 localhost nova_compute[295778]: 2025-10-14 09:56:12.763 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 14 05:56:12 localhost nova_compute[295778]: 2025-10-14 09:56:12.763 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 14 05:56:12 localhost nova_compute[295778]: 2025-10-14 09:56:12.763 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 14 05:56:12 localhost nova_compute[295778]: 2025-10-14 09:56:12.764 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Oct 14 05:56:12 localhost nova_compute[295778]: 2025-10-14 09:56:12.874 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:56:12 localhost nova_compute[295778]: 2025-10-14 09:56:12.895 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:56:12 localhost sshd[295812]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.368 2 INFO nova.virt.driver [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.482 
2 INFO nova.compute.provider_config [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.488 2 DEBUG oslo_concurrency.lockutils [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.489 2 DEBUG oslo_concurrency.lockutils [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.489 2 DEBUG oslo_concurrency.lockutils [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.489 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.489 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.489 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.489 
2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.490 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.490 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.490 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.490 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.490 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.490 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.490 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.490 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.491 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.491 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.491 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.491 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.491 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.491 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] config_file = 
['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.491 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.491 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] console_host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.492 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.492 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.492 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.492 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.492 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m 
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.492 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.492 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.493 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.493 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.493 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.493 2 DEBUG oslo_service.service 
[None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.493 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.493 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.493 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.493 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.494 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.494 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.494 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.494 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] host = np0005486731.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.494 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.494 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.494 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.495 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.495 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.495 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] instance_delete_interval = 300 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.495 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.495 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.495 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.495 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.496 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.496 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.496 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] internal_service_availability_zone = internal log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.496 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.496 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.496 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.496 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.496 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.497 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.497 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.497 2 DEBUG 
oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.497 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.497 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.497 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.497 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.497 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.498 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] logging_exception_prefix = 
%(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.498 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.498 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.498 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.498 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.498 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.498 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.498 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - 
- - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.499 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.499 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.499 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.499 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.499 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.499 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.499 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 
14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.500 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.500 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.500 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.500 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.500 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.500 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.500 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 
2025-10-14 09:56:13.501 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.501 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.501 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.501 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.501 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.501 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.501 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.501 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] pybasedir = 
/usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.502 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.502 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.502 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.502 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.502 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.502 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.502 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost 
nova_compute[295778]: 2025-10-14 09:56:13.502 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.503 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.503 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.503 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.503 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.503 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.503 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.503 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] 
resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.503 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.504 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.504 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.504 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.504 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.504 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.504 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.504 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.504 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.505 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.505 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.505 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.505 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.505 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 
2025-10-14 09:56:13.505 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.505 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.505 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.506 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.506 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.506 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.506 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.506 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] timeout_nbd = 10 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.506 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.506 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.507 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.507 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.507 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.507 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.507 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.507 2 
DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.507 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.507 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.508 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.508 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.508 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.508 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.508 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] watch_log_file = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.508 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.508 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.508 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.509 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.509 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.509 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.509 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = 
/var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.509 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.509 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.509 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.509 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.510 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.510 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.510 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.510 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.510 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.510 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.510 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.511 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.511 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.511 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.511 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.511 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.511 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.511 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.511 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.512 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.512 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.512 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.512 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.512 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.512 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.512 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.513 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.513 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.513 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.513 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.513 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.513 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.513 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.513 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.514 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.514 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.514 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.514 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.514 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.514 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.514 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.515 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.515 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.515 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.515 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.515 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.515 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.515 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.516 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.516 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.516 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.516 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.516 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.516 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.516 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.516 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.517 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.517 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.517 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.517 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.517 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.517 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.517 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.518 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.518 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.518 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.518 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.518 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.518 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.518 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.518 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.519 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.519 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.519 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.519 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.519 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.519 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.519 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.519 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.520 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.520 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.520 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.520 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.520 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.520 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.520 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.521 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.521 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.521 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.521 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.521 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.521 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.521 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.521 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.522 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.522 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.522 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.522 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.522 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.522 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.522 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.522 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.523 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.523 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.523 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.523 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.523 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.523 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.523 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.524 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.524 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.524 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.524 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.524 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.524 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.524 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.524 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.525 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.525 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.525 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.525 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.525 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.525 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.525 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.525 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.526 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.526 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.526 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.526 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.526 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.526 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.526 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.527 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.527 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.527 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.527 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.527 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.527 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.527 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.527 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.528 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.528 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.528 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.528 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.528 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.528 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.528 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.528 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.529 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.529 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.529 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost 
nova_compute[295778]: 2025-10-14 09:56:13.529 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.529 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.529 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.529 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.529 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.530 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.530 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.530 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - 
- - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.530 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.530 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.530 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.530 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.531 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.531 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.531 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.min_version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.531 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.531 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.531 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.531 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.532 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.532 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.532 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.532 2 
DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.532 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.532 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.532 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.532 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.533 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.533 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.533 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - 
- -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.533 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.533 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.533 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.533 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.534 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.534 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.534 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.instances_path_share = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.534 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.534 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.534 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.534 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.534 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.535 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.535 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.535 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.535 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.535 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.535 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.535 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.535 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.536 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.536 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.536 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.536 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.536 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.536 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.537 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.537 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.537 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.537 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.537 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.537 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.537 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.537 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.538 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.538 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.538 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.538 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.538 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.538 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.538 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.538 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.539 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.peer_list = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.539 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.539 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.539 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.539 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.539 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.539 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.540 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 
localhost nova_compute[295778]: 2025-10-14 09:56:13.540 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.540 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.540 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.540 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.540 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.540 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.540 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.541 2 DEBUG 
oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.541 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.541 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.541 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.541 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.541 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.541 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.541 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] 
barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.542 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.542 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.542 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.542 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.542 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.542 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.542 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican.verify_ssl_path = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.543 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.543 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.543 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.543 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.543 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.543 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.543 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican_service_user.keyfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.543 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.544 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.544 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.544 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.544 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.544 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.544 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost 
nova_compute[295778]: 2025-10-14 09:56:13.544 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.545 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.545 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.545 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.545 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.545 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.545 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.545 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] 
vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.546 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.546 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.546 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.546 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.546 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.546 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.546 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.546 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.547 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.547 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.547 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.547 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.547 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.547 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.547 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.547 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.548 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.548 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.548 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.548 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.548 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.548 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] 
keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.548 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.549 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.549 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.549 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.549 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.549 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.549 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.cpu_power_management = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.549 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.549 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.550 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.550 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.550 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.550 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.550 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.550 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.550 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.550 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.551 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.551 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.551 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.551 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 
localhost nova_compute[295778]: 2025-10-14 09:56:13.551 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.551 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.551 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.552 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.552 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.552 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.552 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.552 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.552 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.552 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.552 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.553 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.553 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.553 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.553 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.553 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.553 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.553 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.554 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.554 2 WARNING oslo_config.cfg [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Oct 14 05:56:13 localhost nova_compute[295778]: live_migration_uri is deprecated for removal in favor of two other options that Oct 14 05:56:13 localhost nova_compute[295778]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Oct 14 05:56:13 localhost nova_compute[295778]: and ``live_migration_inbound_addr`` respectively. Oct 14 05:56:13 localhost nova_compute[295778]: ). 
Its value may be silently ignored in the future.#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.554 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.554 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.554 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.554 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.554 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.555 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.555 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.num_aoe_discover_tries = 3 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.555 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.555 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.555 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.555 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.555 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.556 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.556 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.556 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.556 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.556 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.556 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.556 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rbd_secret_uuid = fcadf6e2-9176-5818-a8d0-37b19acf8eaf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.557 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.557 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.557 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.557 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.557 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.557 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.557 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.557 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.558 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 
localhost nova_compute[295778]: 2025-10-14 09:56:13.558 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.558 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.558 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.559 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.559 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.559 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.559 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost 
nova_compute[295778]: 2025-10-14 09:56:13.560 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.560 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.560 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.560 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.560 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.560 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.560 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.561 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.561 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.561 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.561 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.561 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.561 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.561 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.562 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.562 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.562 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.562 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.562 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.562 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.562 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.562 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.collect_timing = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.563 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.563 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.563 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.563 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.563 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.563 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.563 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.563 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.564 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.564 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.564 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.564 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.564 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.564 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.564 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.565 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.565 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.565 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.565 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.565 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.565 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.565 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.valid_interfaces = 
['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.565 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.566 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.566 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.566 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.566 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.566 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.566 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] pci.alias = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.566 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.567 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.567 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.567 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.567 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.567 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.567 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 
localhost nova_compute[295778]: 2025-10-14 09:56:13.567 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.567 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.568 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.568 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.568 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.568 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.568 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.568 2 DEBUG 
oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.568 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.568 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.569 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.569 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.569 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.569 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.569 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] 
placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.569 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.569 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.569 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.570 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.570 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.570 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.570 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.status_code_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.570 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.570 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.570 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.571 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.571 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.571 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.571 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost 
nova_compute[295778]: 2025-10-14 09:56:13.571 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.571 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.571 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.571 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.572 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.572 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.572 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.572 2 DEBUG oslo_service.service 
[None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.572 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.572 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.572 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.572 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.573 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.573 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.573 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.server_group_members = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.573 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.573 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.573 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.574 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.574 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.574 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.574 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.574 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.574 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.574 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.574 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.575 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.575 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.575 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] scheduler.workers = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.575 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.575 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.575 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.575 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.576 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.576 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.576 2 DEBUG 
oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.576 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.576 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.576 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.576 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.576 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.577 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.577 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.577 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.577 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.577 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.577 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.577 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.577 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] 
filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.578 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.578 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.578 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.578 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.578 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.578 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.578 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.579 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.579 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.579 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.579 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.579 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.579 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.579 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.580 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.580 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.580 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.580 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.580 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.580 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.580 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.keyfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.580 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.581 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.581 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.581 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.581 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.581 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.581 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.581 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.582 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.582 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.582 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.582 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.582 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.582 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 
localhost nova_compute[295778]: 2025-10-14 09:56:13.582 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.582 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.583 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.583 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.583 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.583 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.583 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.583 2 DEBUG oslo_service.service 
[None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.583 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.583 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.584 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.584 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.584 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.584 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.584 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.584 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.584 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.585 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.585 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.585 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.585 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.585 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.datastore_regex = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.585 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.585 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.585 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.586 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.586 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.586 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.586 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 
09:56:13.586 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.586 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.586 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.586 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.587 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.587 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.587 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.587 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.587 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.587 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.587 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.587 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.588 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.588 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.588 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - 
-] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.588 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.588 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.588 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.589 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.589 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.589 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.589 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.589 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.589 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.589 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.589 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.590 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.590 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.590 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] 
workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.590 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.590 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.590 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.590 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.590 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.591 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.591 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.591 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.591 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.591 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.591 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.591 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.591 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost 
nova_compute[295778]: 2025-10-14 09:56:13.592 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.592 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.592 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.592 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.592 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.592 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.592 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.593 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.593 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.593 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.593 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.593 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.593 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.593 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.593 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.594 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.594 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.594 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.594 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.594 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.594 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.594 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.595 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.595 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.595 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.595 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.595 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.595 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.595 2 DEBUG 
oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.595 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.596 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.596 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.596 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.596 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.596 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.596 2 
DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.596 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.597 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.597 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.597 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.597 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.597 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 
localhost nova_compute[295778]: 2025-10-14 09:56:13.597 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.597 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.597 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.598 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.598 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.598 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.598 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.598 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.598 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.598 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.598 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.599 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.599 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.599 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.599 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.599 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.599 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.599 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.600 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.600 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.600 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.600 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.600 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.600 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.600 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.600 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.601 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.601 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.601 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.601 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.601 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.601 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.601 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.601 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.602 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 
05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.602 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.602 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.602 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.602 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.602 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.602 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.603 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.603 2 DEBUG oslo_service.service [None 
req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.603 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.603 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.603 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.603 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.603 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.603 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.604 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.status_code_retries = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.604 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.604 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.604 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.604 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.604 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.604 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.604 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.605 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.605 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.605 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.605 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.605 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.605 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.605 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 
2025-10-14 09:56:13.605 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.606 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.606 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.606 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.606 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.606 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.606 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.606 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.607 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.607 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.607 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.607 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.607 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.607 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.607 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.607 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.608 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.608 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.608 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.608 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.608 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.608 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.608 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.609 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.609 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.609 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.609 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.609 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.609 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.609 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.609 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.610 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.610 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.610 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.610 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.610 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.610 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.610 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.610 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.611 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.611 2 DEBUG oslo_service.service [None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.611 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14
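[editor's note] The oslo.config option dump above has one fixed-shape DEBUG entry per option (`group.option = value log_opt_values <file:line>`). A minimal sketch for pulling the option name and value back out of one such entry; the regex and sample line handling are illustration only, not nova or oslo code:

```python
import re

# Hypothetical helper (not part of nova): extract "group.option = value"
# from an oslo_config log_opt_values DEBUG line like the ones above.
OPT_RE = re.compile(
    r"DEBUG oslo_service\.service \[.*?\] "
    r"(?P<opt>[\w.]+) = (?P<value>.*?) log_opt_values "
)

# One entry copied from the log above.
line = ("2025-10-14 09:56:13.608 2 DEBUG oslo_service.service "
        "[None req-96756b67-5014-4ee6-b028-80d0bd3459c1 - - - - - -] "
        "os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values "
        "/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609")

m = OPT_RE.search(line)
if m:
    print(m.group("opt"), "=", m.group("value"))
```

Run over the whole dump, this yields the effective nova/os-vif/os-brick configuration the service started with.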
09:56:13.625 2 INFO nova.virt.node [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Determined node identity ebb6de71-88e5-4477-92fc-f2b9532f7fcd from /var/lib/nova/compute_id#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.626 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.626 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.626 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.627 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.637 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.639 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.640 2 INFO
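[editor's note] Two clocks appear in every entry here: the journald prefix ("Oct 14 05:56:13", host local time) and oslo.log's own timestamp ("2025-10-14 09:56:13.625", evidently UTC). A small sketch, assuming that reading of the two stamps, showing the offset between them; the year for the syslog stamp is taken from the oslo line since syslog omits it:

```python
from datetime import datetime

# Assumed interpretation: journald prefix is local time, oslo.log is UTC.
syslog_local = datetime.strptime("2025 Oct 14 05:56:13", "%Y %b %d %H:%M:%S")
oslo_utc = datetime.strptime("2025-10-14 09:56:13.625", "%Y-%m-%d %H:%M:%S.%f")

# Difference between the two stamps for the same event, in hours.
offset_hours = (oslo_utc - syslog_local).total_seconds() / 3600
print(f"UTC leads local time by about {offset_hours:.0f} hours")
```

A constant four-hour gap across all entries is consistent with a host running in a UTC-4 zone while the service logs in UTC.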
nova.virt.libvirt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.646 2 INFO nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Libvirt host capabilities
[libvirt <capabilities> XML garbled in capture: element tags were stripped, leaving one syslog-prefixed fragment per XML line. Recoverable values: host UUID adf6dc17-eeaa-420b-a893-ea8f9e53b331; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 KiB with page counts 4029153, 0, 0; security models selinux (0, system_u:system_r:svirt_t:s0, system_u:system_r:svirt_tcg_t:s0) and dac (0, +107:+107, +107:+107); guest domains hvm 32-bit and hvm 64-bit via emulator /usr/libexec/qemu-kvm with machines pc-i440fx-rhel7.6.0 (pc), pc-q35-rhel9.6.0 (q35), and pc-q35-rhel7.6.0/8.0.0/8.1.0/8.2.0/8.3.0/8.4.0/8.5.0/8.6.0/9.0.0/9.2.0/9.4.0.]#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.652 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.655 2 DEBUG nova.virt.libvirt.volume.mount [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.657 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
[libvirt <domainCapabilities> XML garbled in capture and truncated at the chunk boundary. Recoverable values: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-q35-rhel9.6.0; arch i686; loader /usr/share/OVMF/OVMF_CODE.secboot.fd with types rom and pflash, readonly yes/no, secure no, on/off toggles; host CPU model EPYC-Rome, vendor AMD; custom CPU models include 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1 through Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1 through Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, ...]
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-IBPB Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v4 Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v1 Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v2 Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: GraniteRapids Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-noTSX Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-noTSX-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-noTSX Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: 
Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v5 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v6 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v7 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: 
Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: KnightsMill Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
Oct 14 05:56:13 localhost nova_compute[295778]: [libvirt domainCapabilities reply; XML markup lost in log capture, recoverable element values follow, grouped by capability section]
Oct 14 05:56:13 localhost nova_compute[295778]: cpu custom models: KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Oct 14 05:56:13 localhost nova_compute[295778]: memoryBacking sourceType: file anonymous memfd
Oct 14 05:56:13 localhost nova_compute[295778]: disk diskDevice: disk cdrom floppy lun; bus: fdc scsi virtio usb sata; model: virtio virtio-transitional virtio-non-transitional
Oct 14 05:56:13 localhost nova_compute[295778]: graphics type: vnc egl-headless dbus
Oct 14 05:56:13 localhost nova_compute[295778]: hostdev mode: subsystem; startupPolicy: default mandatory requisite optional; subsysType: usb pci scsi
Oct 14 05:56:13 localhost nova_compute[295778]: rng model: virtio virtio-transitional virtio-non-transitional; backendModel: random egd builtin
Oct 14 05:56:13 localhost nova_compute[295778]: filesystem driverType: path handle virtiofs
Oct 14 05:56:13 localhost nova_compute[295778]: tpm model: tpm-tis tpm-crb; backendModel: emulator external; backendVersion: 2.0
Oct 14 05:56:13 localhost nova_compute[295778]: redirdev bus: usb; channel type: pty unix; crypto model: qemu, backendModel: builtin; interface backendType: default passt; panic model: isa hyperv
Oct 14 05:56:13 localhost nova_compute[295778]: hyperv features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.662 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 14 05:56:13 localhost nova_compute[295778]: [libvirt domainCapabilities reply; XML markup lost in log capture, recoverable element values follow, grouped by capability section]
Oct 14 05:56:13 localhost nova_compute[295778]: path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: i686
Oct 14 05:56:13 localhost nova_compute[295778]: os loader value: /usr/share/OVMF/OVMF_CODE.secboot.fd; type: rom pflash; readonly: yes no; secure: no
Oct 14 05:56:13 localhost nova_compute[295778]: cpu host-passthrough migratable: on off; maximum migratable: on off; host-model: EPYC-Rome, vendor AMD
Oct 14 05:56:13 localhost nova_compute[295778]: cpu custom models: 486 486-v1 Broadwell Broadwell-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-noTSX Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-noTSX-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-noTSX Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v5 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Conroe Oct 14 05:56:13 localhost nova_compute[295778]: Conroe-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Cooperlake Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cooperlake-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cooperlake-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Denverton Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Denverton-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Denverton-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Denverton-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Dhyana Oct 14 05:56:13 localhost nova_compute[295778]: Dhyana-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Dhyana-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Genoa Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Genoa-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-IBPB Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: EPYC-Rome Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v4 Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v1 Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v2 Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-noTSX Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-noTSX-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
Oct 14 05:56:13 localhost nova_compute[295778]: [libvirt domain capabilities output; repeated per-fragment syslog prefixes collapsed, recoverable values grouped below]
Oct 14 05:56:13 localhost nova_compute[295778]: CPU models: Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Oct 14 05:56:13 localhost nova_compute[295778]: memory backing source types: file anonymous memfd
Oct 14 05:56:13 localhost nova_compute[295778]: disk device types: disk cdrom floppy lun
Oct 14 05:56:13 localhost nova_compute[295778]: disk bus types: ide fdc scsi virtio usb sata
Oct 14 05:56:13 localhost nova_compute[295778]: disk model options: virtio virtio-transitional virtio-non-transitional
Oct 14 05:56:13 localhost nova_compute[295778]: graphics types: vnc egl-headless dbus
Oct 14 05:56:13 localhost nova_compute[295778]: hostdev mode: subsystem; startupPolicy: default mandatory requisite optional; subsystem types: usb pci scsi
Oct 14 05:56:13 localhost nova_compute[295778]: virtio virtio-transitional
nova_compute[295778]: virtio-non-transitional Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: random Oct 14 05:56:13 localhost nova_compute[295778]: egd Oct 14 05:56:13 localhost nova_compute[295778]: builtin Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: path Oct 14 05:56:13 localhost nova_compute[295778]: handle Oct 14 05:56:13 localhost nova_compute[295778]: virtiofs Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: tpm-tis Oct 14 05:56:13 localhost nova_compute[295778]: tpm-crb Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: emulator Oct 14 05:56:13 localhost nova_compute[295778]: external Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: 2.0 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: usb Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: pty Oct 14 05:56:13 localhost nova_compute[295778]: unix Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: qemu Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: builtin Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: default Oct 14 05:56:13 localhost nova_compute[295778]: passt Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: isa Oct 14 05:56:13 localhost nova_compute[295778]: hyperv Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: relaxed Oct 14 05:56:13 localhost nova_compute[295778]: vapic Oct 14 05:56:13 localhost nova_compute[295778]: spinlocks Oct 14 05:56:13 localhost nova_compute[295778]: vpindex Oct 14 05:56:13 localhost nova_compute[295778]: runtime Oct 14 05:56:13 localhost nova_compute[295778]: synic Oct 14 05:56:13 
localhost nova_compute[295778]: stimer Oct 14 05:56:13 localhost nova_compute[295778]: reset Oct 14 05:56:13 localhost nova_compute[295778]: vendor_id Oct 14 05:56:13 localhost nova_compute[295778]: frequencies Oct 14 05:56:13 localhost nova_compute[295778]: reenlightenment Oct 14 05:56:13 localhost nova_compute[295778]: tlbflush Oct 14 05:56:13 localhost nova_compute[295778]: ipi Oct 14 05:56:13 localhost nova_compute[295778]: avic Oct 14 05:56:13 localhost nova_compute[295778]: emsr_bitmap Oct 14 05:56:13 localhost nova_compute[295778]: xmm_input Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.695 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.699 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: /usr/libexec/qemu-kvm Oct 14 05:56:13 localhost nova_compute[295778]: kvm Oct 14 05:56:13 localhost nova_compute[295778]: pc-q35-rhel9.6.0 Oct 14 05:56:13 localhost nova_compute[295778]: x86_64 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: efi Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Oct 14 05:56:13 localhost nova_compute[295778]: /usr/share/edk2/ovmf/OVMF_CODE.fd Oct 14 05:56:13 localhost nova_compute[295778]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Oct 14 05:56:13 localhost nova_compute[295778]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: rom Oct 14 05:56:13 localhost nova_compute[295778]: pflash Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: yes Oct 14 05:56:13 localhost nova_compute[295778]: no Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: yes Oct 14 05:56:13 localhost nova_compute[295778]: no Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: on Oct 14 05:56:13 localhost nova_compute[295778]: off Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: on Oct 14 05:56:13 localhost nova_compute[295778]: off Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome Oct 14 05:56:13 localhost nova_compute[295778]: AMD Oct 14 05:56:13 
localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: 486 Oct 14 05:56:13 localhost nova_compute[295778]: 486-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-noTSX Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-noTSX-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Broadwell-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-noTSX Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Cascadelake-Server-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v5 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Conroe Oct 14 05:56:13 localhost nova_compute[295778]: Conroe-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Cooperlake Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cooperlake-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cooperlake-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Denverton Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Denverton-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Denverton-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Denverton-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Dhyana Oct 14 05:56:13 localhost nova_compute[295778]: Dhyana-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Dhyana-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Genoa Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Genoa-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-IBPB Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v1
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v2
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v3
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v4
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v1
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v2
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v3
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v4
Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids
Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids-v1
Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-noTSX
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-noTSX-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v3
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v4
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-noTSX
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v3
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v4
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v5
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v6
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v7
Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge
Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge-v1
Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge-v2
Oct 14 05:56:13 localhost nova_compute[295778]: KnightsMill
Oct 14 05:56:13 localhost nova_compute[295778]: KnightsMill-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Nehalem
Oct 14 05:56:13 localhost nova_compute[295778]: Nehalem-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Nehalem-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Nehalem-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G1-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G2
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G2-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G3
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G3-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G4
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G4-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G5
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G5-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Penryn
Oct 14 05:56:13 localhost nova_compute[295778]: Penryn-v1
Oct 14 05:56:13 localhost nova_compute[295778]: SandyBridge
Oct 14 05:56:13 localhost nova_compute[295778]: SandyBridge-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: SandyBridge-v1
Oct 14 05:56:13 localhost nova_compute[295778]: SandyBridge-v2
Oct 14 05:56:13 localhost nova_compute[295778]: SapphireRapids
Oct 14 05:56:13 localhost nova_compute[295778]: SapphireRapids-v1
Oct 14 05:56:13 localhost nova_compute[295778]: SapphireRapids-v2
Oct 14 05:56:13 localhost nova_compute[295778]: SapphireRapids-v3
Oct 14 05:56:13 localhost nova_compute[295778]: SierraForest
Oct 14 05:56:13 localhost nova_compute[295778]: SierraForest-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-noTSX-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-v3
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-v4
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-noTSX-IBRS
nova_compute[295778]: Skylake-Server-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: 
Skylake-Server-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-v5 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Westmere Oct 14 05:56:13 localhost nova_compute[295778]: Westmere-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Westmere-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Westmere-v2 Oct 14 
05:56:13 localhost nova_compute[295778]: athlon Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: athlon-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: core2duo Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: core2duo-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: coreduo Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: coreduo-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: kvm32 Oct 14 05:56:13 localhost nova_compute[295778]: kvm32-v1 Oct 14 05:56:13 localhost nova_compute[295778]: kvm64 Oct 14 05:56:13 localhost nova_compute[295778]: kvm64-v1 Oct 14 05:56:13 localhost nova_compute[295778]: n270 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: n270-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: pentium Oct 14 05:56:13 localhost 
nova_compute[295778]: pentium-v1 Oct 14 05:56:13 localhost nova_compute[295778]: pentium2 Oct 14 05:56:13 localhost nova_compute[295778]: pentium2-v1 Oct 14 05:56:13 localhost nova_compute[295778]: pentium3 Oct 14 05:56:13 localhost nova_compute[295778]: pentium3-v1 Oct 14 05:56:13 localhost nova_compute[295778]: phenom Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: phenom-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: qemu32 Oct 14 05:56:13 localhost nova_compute[295778]: qemu32-v1 Oct 14 05:56:13 localhost nova_compute[295778]: qemu64 Oct 14 05:56:13 localhost nova_compute[295778]: qemu64-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: file Oct 14 05:56:13 localhost nova_compute[295778]: anonymous Oct 14 05:56:13 localhost nova_compute[295778]: memfd Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: disk Oct 14 05:56:13 localhost nova_compute[295778]: cdrom Oct 14 05:56:13 localhost nova_compute[295778]: floppy Oct 14 05:56:13 localhost nova_compute[295778]: lun Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: fdc Oct 14 05:56:13 localhost nova_compute[295778]: 
scsi Oct 14 05:56:13 localhost nova_compute[295778]: virtio Oct 14 05:56:13 localhost nova_compute[295778]: usb Oct 14 05:56:13 localhost nova_compute[295778]: sata Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: virtio Oct 14 05:56:13 localhost nova_compute[295778]: virtio-transitional Oct 14 05:56:13 localhost nova_compute[295778]: virtio-non-transitional Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: vnc Oct 14 05:56:13 localhost nova_compute[295778]: egl-headless Oct 14 05:56:13 localhost nova_compute[295778]: dbus Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: subsystem Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: default Oct 14 05:56:13 localhost nova_compute[295778]: mandatory Oct 14 05:56:13 localhost nova_compute[295778]: requisite Oct 14 05:56:13 localhost nova_compute[295778]: optional Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: usb Oct 14 05:56:13 localhost nova_compute[295778]: pci Oct 14 05:56:13 localhost nova_compute[295778]: scsi Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: virtio Oct 14 05:56:13 localhost nova_compute[295778]: virtio-transitional Oct 14 05:56:13 localhost nova_compute[295778]: virtio-non-transitional Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: random Oct 14 05:56:13 localhost nova_compute[295778]: egd Oct 14 05:56:13 localhost nova_compute[295778]: builtin Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: path Oct 14 05:56:13 localhost nova_compute[295778]: handle Oct 14 05:56:13 localhost nova_compute[295778]: virtiofs Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: tpm-tis Oct 14 05:56:13 localhost nova_compute[295778]: tpm-crb Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: emulator Oct 14 05:56:13 localhost nova_compute[295778]: external Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: 2.0 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: usb Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: pty Oct 14 05:56:13 localhost nova_compute[295778]: unix 
Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: qemu Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: builtin Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: default Oct 14 05:56:13 localhost nova_compute[295778]: passt Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: isa Oct 14 05:56:13 localhost nova_compute[295778]: hyperv Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: relaxed Oct 14 05:56:13 localhost nova_compute[295778]: vapic Oct 14 05:56:13 localhost nova_compute[295778]: spinlocks Oct 14 05:56:13 localhost nova_compute[295778]: vpindex Oct 14 
05:56:13 localhost nova_compute[295778]: runtime Oct 14 05:56:13 localhost nova_compute[295778]: synic Oct 14 05:56:13 localhost nova_compute[295778]: stimer Oct 14 05:56:13 localhost nova_compute[295778]: reset Oct 14 05:56:13 localhost nova_compute[295778]: vendor_id Oct 14 05:56:13 localhost nova_compute[295778]: frequencies Oct 14 05:56:13 localhost nova_compute[295778]: reenlightenment Oct 14 05:56:13 localhost nova_compute[295778]: tlbflush Oct 14 05:56:13 localhost nova_compute[295778]: ipi Oct 14 05:56:13 localhost nova_compute[295778]: avic Oct 14 05:56:13 localhost nova_compute[295778]: emsr_bitmap Oct 14 05:56:13 localhost nova_compute[295778]: xmm_input Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.747 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: /usr/libexec/qemu-kvm Oct 14 05:56:13 localhost nova_compute[295778]: kvm Oct 14 05:56:13 localhost nova_compute[295778]: pc-i440fx-rhel7.6.0 Oct 14 05:56:13 localhost nova_compute[295778]: x86_64 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: rom Oct 14 05:56:13 localhost nova_compute[295778]: pflash Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: yes Oct 14 05:56:13 localhost nova_compute[295778]: no Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: no Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: on Oct 14 05:56:13 localhost nova_compute[295778]: off Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: on Oct 14 05:56:13 localhost nova_compute[295778]: off Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome Oct 14 05:56:13 localhost nova_compute[295778]: AMD Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: 486 Oct 14 05:56:13 localhost nova_compute[295778]: 486-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-noTSX Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Broadwell-noTSX-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Broadwell-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-noTSX Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v3
Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v4
Oct 14 05:56:13 localhost nova_compute[295778]: Cascadelake-Server-v5
Oct 14 05:56:13 localhost nova_compute[295778]: Conroe
Oct 14 05:56:13 localhost nova_compute[295778]: Conroe-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Cooperlake
Oct 14 05:56:13 localhost nova_compute[295778]: Cooperlake-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Cooperlake-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Denverton
Oct 14 05:56:13 localhost nova_compute[295778]: Denverton-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Denverton-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Denverton-v3
Oct 14 05:56:13 localhost nova_compute[295778]: Dhyana
Oct 14 05:56:13 localhost nova_compute[295778]: Dhyana-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Dhyana-v2
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Genoa
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Genoa-v1
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-IBPB
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan-v1
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Milan-v2
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v1
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v2
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v3
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-Rome-v4
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v1
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v2
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v3
Oct 14 05:56:13 localhost nova_compute[295778]: EPYC-v4
Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids
Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids-v1
Oct 14 05:56:13 localhost nova_compute[295778]: GraniteRapids-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-noTSX
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-noTSX-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v3
Oct 14 05:56:13 localhost nova_compute[295778]: Haswell-v4
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-noTSX
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v3
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v4
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v5
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v6
Oct 14 05:56:13 localhost nova_compute[295778]: Icelake-Server-v7
Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge
Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge-v1
Oct 14 05:56:13 localhost nova_compute[295778]: IvyBridge-v2
Oct 14 05:56:13 localhost nova_compute[295778]: KnightsMill
Oct 14 05:56:13 localhost nova_compute[295778]: KnightsMill-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Nehalem
Oct 14 05:56:13 localhost nova_compute[295778]: Nehalem-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: Nehalem-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Nehalem-v2
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G1-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G2
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G2-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G3
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G3-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G4
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G4-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G5
Oct 14 05:56:13 localhost nova_compute[295778]: Opteron_G5-v1
Oct 14 05:56:13 localhost nova_compute[295778]: Penryn
Oct 14 05:56:13 localhost nova_compute[295778]: Penryn-v1
Oct 14 05:56:13 localhost nova_compute[295778]: SandyBridge
Oct 14 05:56:13 localhost nova_compute[295778]: SandyBridge-IBRS
Oct 14 05:56:13 localhost nova_compute[295778]: SandyBridge-v1
Oct 14 05:56:13 localhost nova_compute[295778]: SandyBridge-v2
Oct 14 05:56:13 localhost nova_compute[295778]: SapphireRapids
Oct 14 05:56:13 localhost nova_compute[295778]: SapphireRapids-v1
Oct 14 05:56:13 localhost nova_compute[295778]: SapphireRapids-v2
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: SapphireRapids-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: SierraForest Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: SierraForest-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-noTSX-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 
05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Client-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: Skylake-Server-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-noTSX-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Skylake-Server-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Skylake-Server-v5 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge-v2 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge-v3 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Snowridge-v4 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Westmere Oct 14 05:56:13 localhost nova_compute[295778]: Westmere-IBRS Oct 14 05:56:13 localhost nova_compute[295778]: Westmere-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Westmere-v2 Oct 14 05:56:13 localhost nova_compute[295778]: athlon Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: athlon-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: core2duo Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: core2duo-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: coreduo Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: coreduo-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: kvm32 Oct 14 05:56:13 localhost nova_compute[295778]: kvm32-v1 Oct 14 05:56:13 localhost nova_compute[295778]: kvm64 Oct 14 05:56:13 localhost nova_compute[295778]: kvm64-v1 Oct 14 05:56:13 localhost nova_compute[295778]: n270 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: n270-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: pentium Oct 14 05:56:13 localhost nova_compute[295778]: pentium-v1 Oct 14 05:56:13 localhost nova_compute[295778]: pentium2 Oct 14 05:56:13 localhost nova_compute[295778]: pentium2-v1 Oct 14 05:56:13 localhost nova_compute[295778]: pentium3 Oct 14 05:56:13 localhost nova_compute[295778]: pentium3-v1 Oct 14 05:56:13 localhost nova_compute[295778]: phenom Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: phenom-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: 
Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: qemu32 Oct 14 05:56:13 localhost nova_compute[295778]: qemu32-v1 Oct 14 05:56:13 localhost nova_compute[295778]: qemu64 Oct 14 05:56:13 localhost nova_compute[295778]: qemu64-v1 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: file Oct 14 05:56:13 localhost nova_compute[295778]: anonymous Oct 14 05:56:13 localhost nova_compute[295778]: memfd Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: disk Oct 14 05:56:13 localhost nova_compute[295778]: cdrom Oct 14 05:56:13 localhost nova_compute[295778]: floppy Oct 14 05:56:13 localhost nova_compute[295778]: lun Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: ide Oct 14 05:56:13 localhost nova_compute[295778]: fdc Oct 14 05:56:13 localhost nova_compute[295778]: scsi Oct 14 05:56:13 localhost nova_compute[295778]: virtio Oct 14 05:56:13 localhost nova_compute[295778]: usb Oct 14 05:56:13 localhost nova_compute[295778]: sata Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: virtio Oct 14 05:56:13 localhost nova_compute[295778]: virtio-transitional Oct 14 05:56:13 localhost nova_compute[295778]: virtio-non-transitional Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: vnc Oct 14 05:56:13 localhost nova_compute[295778]: egl-headless Oct 14 05:56:13 localhost nova_compute[295778]: dbus Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: subsystem Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: default Oct 14 05:56:13 localhost nova_compute[295778]: mandatory Oct 14 05:56:13 localhost nova_compute[295778]: requisite Oct 14 05:56:13 localhost nova_compute[295778]: optional Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: usb Oct 14 05:56:13 localhost nova_compute[295778]: pci Oct 14 05:56:13 localhost nova_compute[295778]: scsi Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: virtio Oct 14 05:56:13 localhost nova_compute[295778]: virtio-transitional Oct 14 05:56:13 localhost nova_compute[295778]: virtio-non-transitional Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: random Oct 14 05:56:13 localhost nova_compute[295778]: egd Oct 14 05:56:13 localhost nova_compute[295778]: builtin Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost 
nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: path Oct 14 05:56:13 localhost nova_compute[295778]: handle Oct 14 05:56:13 localhost nova_compute[295778]: virtiofs Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: tpm-tis Oct 14 05:56:13 localhost nova_compute[295778]: tpm-crb Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: emulator Oct 14 05:56:13 localhost nova_compute[295778]: external Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: 2.0 Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: usb Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: pty Oct 14 05:56:13 localhost nova_compute[295778]: unix Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: qemu Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: builtin Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 
localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: default Oct 14 05:56:13 localhost nova_compute[295778]: passt Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: isa Oct 14 05:56:13 localhost nova_compute[295778]: hyperv Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: relaxed Oct 14 05:56:13 localhost nova_compute[295778]: vapic Oct 14 05:56:13 localhost nova_compute[295778]: spinlocks Oct 14 05:56:13 localhost nova_compute[295778]: vpindex Oct 14 05:56:13 localhost nova_compute[295778]: runtime Oct 14 05:56:13 localhost nova_compute[295778]: synic Oct 14 05:56:13 localhost nova_compute[295778]: stimer Oct 14 05:56:13 localhost nova_compute[295778]: reset Oct 14 05:56:13 localhost nova_compute[295778]: vendor_id Oct 14 05:56:13 localhost nova_compute[295778]: frequencies Oct 14 05:56:13 localhost nova_compute[295778]: reenlightenment Oct 14 05:56:13 localhost nova_compute[295778]: tlbflush Oct 14 05:56:13 localhost nova_compute[295778]: ipi Oct 14 05:56:13 localhost nova_compute[295778]: avic Oct 14 05:56:13 localhost nova_compute[295778]: 
emsr_bitmap Oct 14 05:56:13 localhost nova_compute[295778]: xmm_input Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: Oct 14 05:56:13 localhost nova_compute[295778]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.794 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.795 2 INFO nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Secure Boot support detected#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.797 2 INFO nova.virt.libvirt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.797 2 INFO nova.virt.libvirt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.808 2 DEBUG nova.virt.libvirt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.831 2 INFO nova.virt.node [None req-727d3e03-5808-4df4-889a-251c511937f2 - - 
- - - -] Determined node identity ebb6de71-88e5-4477-92fc-f2b9532f7fcd from /var/lib/nova/compute_id#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.848 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Verified node ebb6de71-88e5-4477-92fc-f2b9532f7fcd matches my host np0005486731.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Oct 14 05:56:13 localhost nova_compute[295778]: 2025-10-14 09:56:13.872 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.151 2 DEBUG oslo_concurrency.lockutils [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.152 2 DEBUG oslo_concurrency.lockutils [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.152 2 DEBUG oslo_concurrency.lockutils [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.152 2 DEBUG nova.compute.resource_tracker [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Auditing 
locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.153 2 DEBUG oslo_concurrency.processutils [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:56:14 localhost python3.9[295928]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None 
init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.653 2 DEBUG oslo_concurrency.processutils [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:56:14 localhost systemd[1]: Started libpod-conmon-8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20.scope. Oct 14 05:56:14 localhost systemd[1]: Started libcrun container. 
Oct 14 05:56:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Oct 14 05:56:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 14 05:56:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Oct 14 05:56:14 localhost podman[295973]: 2025-10-14 09:56:14.728937742 +0000 UTC m=+0.142616306 container init 8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_managed=true) Oct 14 05:56:14 localhost podman[295973]: 2025-10-14 09:56:14.738004883 +0000 UTC m=+0.151683407 container start 8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:56:14 localhost python3.9[295928]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Applying nova statedir ownership Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Oct 14 05:56:14 localhost nova_compute_init[295995]: 
INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Oct 14 05:56:14 localhost nova_compute_init[295995]: 
INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7dbe5bae7bc27ef07490c629ec1f09edaa9e8c135ff89c3f08f1e44f39cf5928 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/9469aff02825a9e3dcdb3ceeb358f8d540dc07c8b6e9cd975f170399051d29c3 Oct 14 05:56:14 localhost nova_compute_init[295995]: INFO:nova_statedir:Nova statedir ownership complete Oct 14 05:56:14 localhost systemd[1]: libpod-8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20.scope: Deactivated successfully. 
Oct 14 05:56:14 localhost podman[295996]: 2025-10-14 09:56:14.823210521 +0000 UTC m=+0.063760958 container died 8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:56:14 localhost podman[296006]: 2025-10-14 09:56:14.915696312 +0000 UTC m=+0.129123377 container cleanup 8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 
'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:56:14 localhost systemd[1]: libpod-conmon-8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20.scope: Deactivated successfully. Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.925 2 WARNING nova.virt.libvirt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.928 2 DEBUG nova.compute.resource_tracker [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12734MB free_disk=41.83725357055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.929 2 DEBUG oslo_concurrency.lockutils [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:56:14 localhost nova_compute[295778]: 2025-10-14 09:56:14.929 2 DEBUG oslo_concurrency.lockutils [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:56:15 localhost nova_compute[295778]: 2025-10-14 09:56:15.301 2 DEBUG nova.compute.resource_tracker [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:56:15 localhost nova_compute[295778]: 2025-10-14 09:56:15.302 2 DEBUG nova.compute.resource_tracker [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:56:15 localhost nova_compute[295778]: 2025-10-14 09:56:15.319 2 DEBUG nova.scheduler.client.report [None 
req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 14 05:56:15 localhost nova_compute[295778]: 2025-10-14 09:56:15.494 2 DEBUG nova.scheduler.client.report [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 14 05:56:15 localhost nova_compute[295778]: 2025-10-14 09:56:15.495 2 DEBUG nova.compute.provider_tree [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 14 05:56:15 localhost nova_compute[295778]: 2025-10-14 09:56:15.514 2 DEBUG nova.scheduler.client.report [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 14 05:56:15 localhost nova_compute[295778]: 2025-10-14 09:56:15.535 2 DEBUG nova.scheduler.client.report [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 14 05:56:15 localhost nova_compute[295778]: 2025-10-14 09:56:15.556 2 DEBUG oslo_concurrency.processutils [None 
req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:56:15 localhost systemd[1]: var-lib-containers-storage-overlay-02bcd85d32816a5c77f760cc28cb040664c934fb0262fceda2dd57dc4aec8f01-merged.mount: Deactivated successfully. Oct 14 05:56:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c8f0eb4c07c541b46e09b9b7ca49ce557180cf9e6422b964e524989a0e91c20-userdata-shm.mount: Deactivated successfully. Oct 14 05:56:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:56:15 localhost systemd[1]: session-61.scope: Deactivated successfully. Oct 14 05:56:15 localhost systemd[1]: session-61.scope: Consumed 1min 48.701s CPU time. Oct 14 05:56:15 localhost systemd-logind[760]: Session 61 logged out. Waiting for processes to exit. Oct 14 05:56:15 localhost systemd-logind[760]: Removed session 61. 
Oct 14 05:56:15 localhost podman[296054]: 2025-10-14 09:56:15.768248452 +0000 UTC m=+0.092619656 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:56:15 localhost podman[296054]: 2025-10-14 09:56:15.777615901 +0000 UTC m=+0.101987105 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2) Oct 14 05:56:15 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.093 2 DEBUG oslo_concurrency.processutils [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.100 2 DEBUG nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Oct 14 05:56:16 localhost nova_compute[295778]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.100 2 INFO nova.virt.libvirt.host [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] kernel doesn't support AMD SEV#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.101 2 DEBUG nova.compute.provider_tree [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.102 2 DEBUG nova.virt.libvirt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.129 2 DEBUG nova.scheduler.client.report [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 
'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.161 2 DEBUG nova.compute.resource_tracker [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.161 2 DEBUG oslo_concurrency.lockutils [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.162 2 DEBUG nova.service [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.188 2 DEBUG nova.service [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Oct 14 05:56:16 localhost nova_compute[295778]: 2025-10-14 09:56:16.189 2 DEBUG nova.servicegroup.drivers.db [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] DB_Driver: join new ServiceGroup member np0005486731.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Oct 14 05:56:16 localhost sshd[296095]: main: sshd: ssh-rsa algorithm 
is disabled Oct 14 05:56:19 localhost sshd[296097]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:56:22 localhost podman[296100]: 2025-10-14 09:56:22.546877762 +0000 UTC m=+0.086240266 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 14 05:56:22 localhost podman[296100]: 2025-10-14 09:56:22.552894002 +0000 UTC m=+0.092256526 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:56:22 localhost podman[296101]: 2025-10-14 09:56:22.592899927 +0000 UTC m=+0.129516457 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 05:56:22 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:56:22 localhost podman[296101]: 2025-10-14 09:56:22.658521073 +0000 UTC m=+0.195137583 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:56:22 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 05:56:23 localhost sshd[296140]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57276 DF PROTO=TCP SPT=42066 DPT=9102 SEQ=3718995378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76273C8D0000000001030307) Oct 14 05:56:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57277 DF PROTO=TCP SPT=42066 DPT=9102 SEQ=3718995378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762740AA0000000001030307) Oct 14 05:56:26 localhost sshd[296142]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57278 DF PROTO=TCP SPT=42066 DPT=9102 SEQ=3718995378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762748A90000000001030307) Oct 14 05:56:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:56:28 localhost podman[296144]: 2025-10-14 09:56:28.565888067 +0000 UTC m=+0.106244209 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:56:28 localhost podman[296144]: 2025-10-14 09:56:28.608148062 +0000 UTC m=+0.148504214 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 05:56:28 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:56:30 localhost sshd[296163]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:30 localhost podman[246584]: time="2025-10-14T09:56:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:56:30 localhost podman[246584]: @ - - [14/Oct/2025:09:56:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:56:30 localhost podman[246584]: @ - - [14/Oct/2025:09:56:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16867 "" "Go-http-client/1.1" Oct 14 05:56:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57279 DF PROTO=TCP SPT=42066 DPT=9102 SEQ=3718995378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762758690000000001030307) Oct 14 05:56:33 localhost openstack_network_exporter[248748]: ERROR 09:56:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:56:33 localhost openstack_network_exporter[248748]: ERROR 09:56:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:56:33 localhost openstack_network_exporter[248748]: ERROR 09:56:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:56:33 localhost openstack_network_exporter[248748]: ERROR 09:56:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:56:33 localhost openstack_network_exporter[248748]: Oct 14 05:56:33 localhost openstack_network_exporter[248748]: ERROR 09:56:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:56:33 localhost openstack_network_exporter[248748]: Oct 14 05:56:33 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:56:33 localhost podman[296165]: 2025-10-14 09:56:33.51040912 +0000 UTC m=+0.075773141 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible) Oct 14 05:56:33 localhost podman[296165]: 2025-10-14 09:56:33.548172981 +0000 UTC m=+0.113537002 container exec_died 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS) Oct 14 05:56:33 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:56:33 localhost sshd[296184]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 05:56:35 localhost podman[296186]: 2025-10-14 09:56:35.535617431 +0000 UTC m=+0.076657544 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64) Oct 14 05:56:35 localhost podman[296186]: 2025-10-14 09:56:35.551204484 +0000 UTC m=+0.092244647 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 05:56:35 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:56:36 localhost podman[296207]: 2025-10-14 09:56:36.28112274 +0000 UTC m=+0.076722826 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 05:56:36 localhost podman[296207]: 2025-10-14 09:56:36.340799632 +0000 UTC m=+0.136399758 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 05:56:36 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:56:37 localhost sshd[296232]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:56:37 localhost systemd[1]: tmp-crun.PNrkIZ.mount: Deactivated successfully. 
Oct 14 05:56:37 localhost podman[296234]: 2025-10-14 09:56:37.292425286 +0000 UTC m=+0.084807490 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:56:37 localhost podman[296234]: 2025-10-14 09:56:37.324619749 +0000 UTC m=+0.117001943 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:56:37 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:56:40 localhost sshd[296257]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:43 localhost sshd[296260]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:56:46 localhost podman[296263]: 2025-10-14 09:56:46.544391835 +0000 UTC m=+0.084741009 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:56:46 localhost podman[296263]: 2025-10-14 09:56:46.553694521 +0000 UTC m=+0.094043685 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 05:56:46 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:56:47 localhost sshd[296282]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 
14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 
09:56:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:56:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:56:50 localhost sshd[296430]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:53 localhost ovn_metadata_agent[161927]: 2025-10-14 09:56:53.263 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 05:56:53 localhost ovn_metadata_agent[161927]: 2025-10-14 09:56:53.264 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 05:56:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:56:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:56:53 localhost podman[296433]: 2025-10-14 09:56:53.520476055 +0000 UTC m=+0.065136768 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:56:53 localhost podman[296433]: 2025-10-14 09:56:53.533192752 +0000 UTC m=+0.077853415 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:56:53 localhost podman[296432]: 2025-10-14 09:56:53.538636606 +0000 UTC m=+0.080818274 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 05:56:53 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:56:53 localhost podman[296432]: 2025-10-14 09:56:53.545502649 +0000 UTC m=+0.087684337 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Oct 14 05:56:53 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:56:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46319 DF PROTO=TCP SPT=37332 DPT=9102 SEQ=914601990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7627B1BE0000000001030307) Oct 14 05:56:53 localhost sshd[296471]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46320 DF PROTO=TCP SPT=37332 DPT=9102 SEQ=914601990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7627B5A90000000001030307) Oct 14 05:56:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46321 DF PROTO=TCP SPT=37332 DPT=9102 SEQ=914601990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7627BDA90000000001030307) Oct 14 05:56:57 localhost nova_compute[295778]: 2025-10-14 09:56:57.190 2 DEBUG oslo_service.periodic_task [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:56:57 localhost nova_compute[295778]: 2025-10-14 09:56:57.217 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:56:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:56:57.267 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 05:56:57 localhost sshd[296473]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:56:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:56:57.620 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:56:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:56:57.621 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:56:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:56:57.621 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:56:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:56:59 localhost systemd[1]: tmp-crun.hESIoX.mount: Deactivated successfully. Oct 14 05:56:59 localhost podman[296475]: 2025-10-14 09:56:59.543917727 +0000 UTC m=+0.085848098 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 05:56:59 localhost podman[296475]: 2025-10-14 09:56:59.557008244 +0000 UTC m=+0.098938655 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3) Oct 
14 05:56:59 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:57:00 localhost podman[246584]: time="2025-10-14T09:57:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:57:00 localhost podman[246584]: @ - - [14/Oct/2025:09:57:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:57:00 localhost podman[246584]: @ - - [14/Oct/2025:09:57:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16866 "" "Go-http-client/1.1" Oct 14 05:57:00 localhost sshd[296496]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46322 DF PROTO=TCP SPT=37332 DPT=9102 SEQ=914601990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7627CD690000000001030307) Oct 14 05:57:03 localhost openstack_network_exporter[248748]: ERROR 09:57:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:57:03 localhost openstack_network_exporter[248748]: ERROR 09:57:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:57:03 localhost openstack_network_exporter[248748]: ERROR 09:57:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:57:03 localhost openstack_network_exporter[248748]: ERROR 09:57:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:57:03 localhost openstack_network_exporter[248748]: Oct 14 05:57:03 localhost openstack_network_exporter[248748]: ERROR 09:57:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): 
please specify an existing datapath Oct 14 05:57:03 localhost openstack_network_exporter[248748]: Oct 14 05:57:04 localhost sshd[296498]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:57:04 localhost podman[296500]: 2025-10-14 09:57:04.501498476 +0000 UTC m=+0.079581472 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:57:04 localhost podman[296500]: 2025-10-14 09:57:04.535921319 +0000 UTC m=+0.114004305 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:57:04 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 05:57:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 05:57:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 05:57:06 localhost podman[296522]: 2025-10-14 09:57:06.543900083 +0000 UTC m=+0.083718631 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7)
Oct 14 05:57:06 localhost podman[296522]: 2025-10-14 09:57:06.58414186 +0000 UTC m=+0.123960358 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Oct 14 05:57:06 localhost systemd[1]: tmp-crun.yMrXS8.mount: Deactivated successfully.
Oct 14 05:57:06 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 05:57:06 localhost podman[296523]: 2025-10-14 09:57:06.604172331 +0000 UTC m=+0.142982472 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 05:57:06 localhost podman[296523]: 2025-10-14 09:57:06.640302849 +0000 UTC m=+0.179112960 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller)
Oct 14 05:57:06 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 05:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 05:57:07 localhost podman[296565]: 2025-10-14 09:57:07.531370407 +0000 UTC m=+0.075550524 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 05:57:07 localhost podman[296565]: 2025-10-14 09:57:07.568175803 +0000 UTC m=+0.112355860 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 05:57:07 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 05:57:07 localhost sshd[296588]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:57:10 localhost sshd[296590]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.906 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.907 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.907 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.924 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.924 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.925 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.925 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.926 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.926 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.927 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.927 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.928 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.946 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.947 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.947 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.947 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:57:12 localhost nova_compute[295778]: 2025-10-14 09:57:12.948 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:57:13 localhost nova_compute[295778]: 2025-10-14 09:57:13.404 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:57:13 localhost nova_compute[295778]: 2025-10-14 09:57:13.590 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:57:13 localhost nova_compute[295778]: 2025-10-14 09:57:13.592 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12763MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:57:13 localhost nova_compute[295778]: 2025-10-14 09:57:13.592 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:57:13 localhost nova_compute[295778]: 2025-10-14 09:57:13.593 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:57:13 localhost nova_compute[295778]: 2025-10-14 09:57:13.658 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:57:13 localhost nova_compute[295778]: 2025-10-14 09:57:13.659 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:57:13 localhost nova_compute[295778]: 2025-10-14 09:57:13.675 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:57:14 localhost nova_compute[295778]: 2025-10-14 09:57:14.178 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:57:14 localhost nova_compute[295778]: 2025-10-14 09:57:14.184 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:57:14 localhost nova_compute[295778]: 2025-10-14 09:57:14.206 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:57:14 localhost nova_compute[295778]: 2025-10-14 09:57:14.209 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:57:14 localhost nova_compute[295778]: 2025-10-14 09:57:14.209 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:57:14 localhost sshd[296637]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:57:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 05:57:17 localhost podman[296639]: 2025-10-14 09:57:17.546926294 +0000 UTC m=+0.090698476 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 05:57:17 localhost podman[296639]: 2025-10-14 09:57:17.564275004 +0000 UTC m=+0.108047196 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 14 05:57:17 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 05:57:17 localhost sshd[296658]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:57:21 localhost sshd[296660]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:57:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39175 DF PROTO=TCP SPT=34870 DPT=9102 SEQ=2052787051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762826EE0000000001030307)
Oct 14 05:57:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 05:57:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 05:57:24 localhost podman[296663]: 2025-10-14 09:57:24.345406968 +0000 UTC m=+0.084316927 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 14 05:57:24 localhost podman[296663]: 2025-10-14 09:57:24.35715015 +0000 UTC m=+0.096060599 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 05:57:24 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 05:57:24 localhost podman[296664]: 2025-10-14 09:57:24.404866866 +0000 UTC m=+0.137099467 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 05:57:24 localhost podman[296664]: 2025-10-14 09:57:24.415099746 +0000 UTC m=+0.147332327 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 14 05:57:24 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 05:57:24 localhost sshd[296705]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:57:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39176 DF PROTO=TCP SPT=34870 DPT=9102 SEQ=2052787051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76282AE90000000001030307)
Oct 14 05:57:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39177 DF PROTO=TCP SPT=34870 DPT=9102 SEQ=2052787051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762832E90000000001030307)
Oct 14 05:57:28 localhost sshd[296707]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 05:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 05:57:30 localhost podman[296709]: 2025-10-14 09:57:30.550900366 +0000 UTC m=+0.083919505 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Oct 14 05:57:30 localhost podman[296709]: 2025-10-14 09:57:30.562060182 +0000 UTC m=+0.095079311 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd) Oct 14 05:57:30 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:57:30 localhost podman[246584]: time="2025-10-14T09:57:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:57:30 localhost podman[246584]: @ - - [14/Oct/2025:09:57:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:57:30 localhost podman[246584]: @ - - [14/Oct/2025:09:57:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16866 "" "Go-http-client/1.1" Oct 14 05:57:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39178 DF PROTO=TCP SPT=34870 DPT=9102 SEQ=2052787051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762842A90000000001030307) Oct 14 05:57:31 localhost sshd[296730]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:33 localhost openstack_network_exporter[248748]: ERROR 09:57:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:57:33 localhost openstack_network_exporter[248748]: ERROR 09:57:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:57:33 localhost openstack_network_exporter[248748]: ERROR 09:57:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:57:33 localhost openstack_network_exporter[248748]: ERROR 09:57:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:57:33 localhost openstack_network_exporter[248748]: Oct 14 05:57:33 localhost openstack_network_exporter[248748]: ERROR 09:57:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:57:33 localhost openstack_network_exporter[248748]: Oct 14 05:57:34 localhost sshd[296732]: 
main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:57:35 localhost podman[296734]: 2025-10-14 09:57:35.205340666 +0000 UTC m=+0.074627211 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 05:57:35 localhost podman[296734]: 2025-10-14 
09:57:35.218129705 +0000 UTC m=+0.087416260 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:57:35 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:57:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 05:57:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:57:36 localhost podman[296753]: 2025-10-14 09:57:36.931008534 +0000 UTC m=+0.080624189 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Oct 14 05:57:36 localhost podman[296753]: 2025-10-14 09:57:36.942240712 +0000 UTC m=+0.091856407 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Oct 14 05:57:36 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:57:36 localhost podman[296754]: 2025-10-14 09:57:36.982044557 +0000 UTC m=+0.127689976 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 05:57:37 localhost podman[296754]: 2025-10-14 09:57:37.013972434 +0000 UTC m=+0.159617903 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 05:57:37 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:57:38 localhost sshd[296797]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:57:38 localhost systemd[1]: tmp-crun.6DjW2G.mount: Deactivated successfully. 
Oct 14 05:57:38 localhost podman[296799]: 2025-10-14 09:57:38.547960351 +0000 UTC m=+0.087261825 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:57:38 localhost podman[296799]: 2025-10-14 09:57:38.555493101 +0000 UTC m=+0.094794595 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:57:38 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:57:41 localhost sshd[296823]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:45 localhost sshd[296826]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:57:48 localhost systemd[1]: tmp-crun.uZzPZq.mount: Deactivated successfully. 
Oct 14 05:57:48 localhost podman[296828]: 2025-10-14 09:57:48.552111186 +0000 UTC m=+0.092017111 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:57:48 localhost podman[296828]: 2025-10-14 09:57:48.589496088 +0000 UTC m=+0.129402013 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:57:48 localhost sshd[296847]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:48 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:57:49 localhost ovn_metadata_agent[161927]: 2025-10-14 09:57:49.054 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 05:57:49 localhost ovn_metadata_agent[161927]: 2025-10-14 09:57:49.055 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 05:57:52 localhost sshd[296934]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:52 localhost ovn_metadata_agent[161927]: 2025-10-14 09:57:52.058 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 05:57:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35389 DF PROTO=TCP SPT=55652 DPT=9102 SEQ=3804750030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76289C1E0000000001030307) Oct 14 05:57:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 05:57:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:57:54 localhost podman[296936]: 2025-10-14 09:57:54.551961521 +0000 UTC m=+0.089306989 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 05:57:54 localhost podman[296936]: 2025-10-14 09:57:54.592111476 +0000 UTC m=+0.129456914 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 05:57:54 
localhost podman[296937]: 2025-10-14 09:57:54.603495648 +0000 UTC m=+0.138582866 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:57:54 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:57:54 localhost podman[296937]: 2025-10-14 09:57:54.612453226 +0000 UTC m=+0.147540494 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:57:54 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 05:57:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35390 DF PROTO=TCP SPT=55652 DPT=9102 SEQ=3804750030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7628A0290000000001030307) Oct 14 05:57:55 localhost sshd[296979]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:57:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35391 DF PROTO=TCP SPT=55652 DPT=9102 SEQ=3804750030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7628A82A0000000001030307) Oct 14 05:57:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:57:57.621 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:57:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:57:57.621 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:57:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:57:57.621 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:57:58 localhost sshd[296981]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:00 localhost podman[246584]: time="2025-10-14T09:58:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 
05:58:00 localhost podman[246584]: @ - - [14/Oct/2025:09:58:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:58:00 localhost podman[246584]: @ - - [14/Oct/2025:09:58:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16874 "" "Go-http-client/1.1" Oct 14 05:58:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:58:00 localhost podman[296983]: 2025-10-14 09:58:00.789749728 +0000 UTC m=+0.085378626 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Oct 14 05:58:00 localhost podman[296983]: 2025-10-14 09:58:00.829119152 +0000 UTC m=+0.124748040 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3) Oct 14 05:58:00 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:58:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35392 DF PROTO=TCP SPT=55652 DPT=9102 SEQ=3804750030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7628B7E90000000001030307) Oct 14 05:58:03 localhost openstack_network_exporter[248748]: ERROR 09:58:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:58:03 localhost openstack_network_exporter[248748]: ERROR 09:58:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:58:03 localhost openstack_network_exporter[248748]: ERROR 09:58:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:58:03 localhost openstack_network_exporter[248748]: ERROR 09:58:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:58:03 localhost openstack_network_exporter[248748]: Oct 14 05:58:03 localhost openstack_network_exporter[248748]: ERROR 09:58:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:58:03 localhost openstack_network_exporter[248748]: Oct 14 05:58:04 localhost sshd[297000]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:58:05 localhost podman[297003]: 2025-10-14 09:58:05.552476957 +0000 UTC m=+0.088789595 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid) Oct 14 05:58:05 localhost podman[297003]: 2025-10-14 09:58:05.590166116 +0000 UTC m=+0.126478744 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 05:58:05 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:58:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 05:58:07 localhost podman[297022]: 2025-10-14 09:58:07.115208525 +0000 UTC m=+0.087407788 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public) Oct 14 05:58:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:58:07 localhost podman[297022]: 2025-10-14 09:58:07.13313493 +0000 UTC m=+0.105334193 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public) Oct 14 05:58:07 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:58:07 localhost podman[297042]: 2025-10-14 09:58:07.217940218 +0000 UTC m=+0.080715990 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:58:07 localhost podman[297042]: 2025-10-14 09:58:07.286393604 +0000 UTC m=+0.149169376 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 05:58:07 localhost sshd[297067]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:07 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:58:09 localhost podman[297069]: 2025-10-14 09:58:09.538555383 +0000 UTC m=+0.078322038 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:58:09 localhost podman[297069]: 2025-10-14 09:58:09.57052822 +0000 UTC m=+0.110294785 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:58:09 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:58:10 localhost sshd[297093]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:13 localhost sshd[297096]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.203 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.204 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.225 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.225 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.226 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.247 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.247 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.247 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.248 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.248 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.248 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.249 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.249 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.276 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.277 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.277 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.278 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.278 2 DEBUG 
oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.737 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.936 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.938 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12752MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.939 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:58:14 localhost nova_compute[295778]: 2025-10-14 09:58:14.939 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:58:15 localhost 
nova_compute[295778]: 2025-10-14 09:58:15.007 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:58:15 localhost nova_compute[295778]: 2025-10-14 09:58:15.008 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:58:15 localhost nova_compute[295778]: 2025-10-14 09:58:15.036 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:58:15 localhost nova_compute[295778]: 2025-10-14 09:58:15.495 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:58:15 localhost nova_compute[295778]: 2025-10-14 09:58:15.501 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:58:15 localhost nova_compute[295778]: 2025-10-14 09:58:15.523 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider 
ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:58:15 localhost nova_compute[295778]: 2025-10-14 09:58:15.526 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:58:15 localhost nova_compute[295778]: 2025-10-14 09:58:15.526 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:58:16 localhost nova_compute[295778]: 2025-10-14 09:58:16.182 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:58:17 localhost sshd[297143]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 05:58:19 localhost systemd[1]: tmp-crun.GAAmse.mount: Deactivated successfully. 
Oct 14 05:58:19 localhost podman[297146]: 2025-10-14 09:58:19.555430675 +0000 UTC m=+0.093644044 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute) Oct 14 05:58:19 localhost podman[297146]: 2025-10-14 09:58:19.565626916 +0000 UTC m=+0.103840305 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 05:58:19 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:58:20 localhost sshd[297165]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:23 localhost sshd[297167]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52912 DF PROTO=TCP SPT=32848 DPT=9102 SEQ=2973230720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7629114E0000000001030307) Oct 14 05:58:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:58:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:58:24 localhost podman[297171]: 2025-10-14 09:58:24.839345206 +0000 UTC m=+0.098137493 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 05:58:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52913 DF PROTO=TCP SPT=32848 DPT=9102 SEQ=2973230720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762915690000000001030307) Oct 14 05:58:24 localhost podman[297171]: 2025-10-14 09:58:24.850031719 +0000 UTC m=+0.108824056 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:58:24 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 05:58:24 localhost podman[297170]: 2025-10-14 09:58:24.81423072 +0000 UTC m=+0.077013323 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:58:24 localhost podman[297170]: 2025-10-14 09:58:24.898183577 +0000 UTC 
m=+0.160966160 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 05:58:24 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:58:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52914 DF PROTO=TCP SPT=32848 DPT=9102 SEQ=2973230720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76291D6A0000000001030307) Oct 14 05:58:27 localhost sshd[297210]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:30 localhost sshd[297212]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:30 localhost podman[246584]: time="2025-10-14T09:58:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:58:30 localhost podman[246584]: @ - - [14/Oct/2025:09:58:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:58:30 localhost podman[246584]: @ - - [14/Oct/2025:09:58:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16872 "" "Go-http-client/1.1" Oct 14 05:58:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52915 DF PROTO=TCP SPT=32848 DPT=9102 SEQ=2973230720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76292D290000000001030307) Oct 14 05:58:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:58:31 localhost podman[297214]: 2025-10-14 09:58:31.35434012 +0000 UTC m=+0.074826114 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Oct 14 05:58:31 localhost podman[297214]: 2025-10-14 09:58:31.369102112 +0000 UTC m=+0.089588146 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 14 05:58:31 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:58:33 localhost openstack_network_exporter[248748]: ERROR 09:58:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:58:33 localhost openstack_network_exporter[248748]: ERROR 09:58:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:58:33 localhost openstack_network_exporter[248748]: ERROR 09:58:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:58:33 localhost openstack_network_exporter[248748]: ERROR 09:58:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:58:33 localhost openstack_network_exporter[248748]: Oct 14 05:58:33 localhost openstack_network_exporter[248748]: ERROR 09:58:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:58:33 localhost openstack_network_exporter[248748]: Oct 14 05:58:33 localhost sshd[297233]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 05:58:36 localhost systemd[1]: tmp-crun.lU6ESZ.mount: Deactivated successfully. 
Oct 14 05:58:36 localhost podman[297236]: 2025-10-14 09:58:36.539418411 +0000 UTC m=+0.079568171 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:58:36 localhost podman[297236]: 2025-10-14 09:58:36.548815011 +0000 UTC m=+0.088964751 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 05:58:36 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:58:36 localhost sshd[297254]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 05:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 05:58:37 localhost podman[297256]: 2025-10-14 09:58:37.533449579 +0000 UTC m=+0.077528517 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64) Oct 14 05:58:37 localhost podman[297256]: 2025-10-14 09:58:37.545846258 +0000 UTC m=+0.089925176 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 
'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible) Oct 14 05:58:37 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 05:58:37 localhost podman[297257]: 2025-10-14 09:58:37.642975113 +0000 UTC m=+0.184281577 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 05:58:37 localhost systemd[1]: tmp-crun.VdLceG.mount: Deactivated successfully. 
Oct 14 05:58:37 localhost podman[297257]: 2025-10-14 09:58:37.702926073 +0000 UTC m=+0.244232547 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 05:58:37 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 05:58:38 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Oct 14 05:58:40 localhost sshd[297302]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 05:58:40 localhost podman[297304]: 2025-10-14 09:58:40.388413322 +0000 UTC m=+0.077540667 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 05:58:40 localhost podman[297304]: 2025-10-14 09:58:40.422467175 +0000 UTC m=+0.111594540 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:58:40 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:58:43 localhost sshd[297326]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:46 localhost sshd[297328]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:58:49 localhost podman[297331]: 2025-10-14 09:58:49.8296005 +0000 UTC m=+0.083409572 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:58:49 localhost podman[297331]: 2025-10-14 09:58:49.846144999 +0000 UTC m=+0.099954041 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute) Oct 14 05:58:49 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 
2025-10-14 09:58:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this 
cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 09:58:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 05:58:50 localhost sshd[297351]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:52 localhost podman[297459]: 2025-10-14 09:58:52.492791079 +0000 UTC m=+0.098457572 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, release=553, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 05:58:52 localhost podman[297459]: 2025-10-14 09:58:52.601118571 +0000 UTC m=+0.206785054 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 05:58:53 localhost sshd[297565]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63993 DF PROTO=TCP SPT=43738 DPT=9102 SEQ=1409566212 
ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7629867E0000000001030307) Oct 14 05:58:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63994 DF PROTO=TCP SPT=43738 DPT=9102 SEQ=1409566212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A76298A690000000001030307) Oct 14 05:58:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:58:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:58:55 localhost podman[297615]: 2025-10-14 09:58:55.552558333 +0000 UTC m=+0.087617014 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:58:55 localhost podman[297615]: 2025-10-14 09:58:55.592366598 +0000 UTC m=+0.127425279 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:58:55 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 05:58:55 localhost podman[297614]: 2025-10-14 09:58:55.601419089 +0000 UTC m=+0.136456590 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:58:55 localhost podman[297614]: 2025-10-14 09:58:55.686091694 +0000 UTC 
m=+0.221129225 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 05:58:55 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:58:56 localhost sshd[297653]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:58:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63995 DF PROTO=TCP SPT=43738 DPT=9102 SEQ=1409566212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7629926A0000000001030307) Oct 14 05:58:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:58:57.622 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:58:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:58:57.623 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:58:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:58:57.623 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:58:59 localhost sshd[297656]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:00 localhost podman[246584]: time="2025-10-14T09:59:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:59:00 localhost podman[246584]: @ - - [14/Oct/2025:09:59:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:59:00 localhost podman[246584]: @ - - [14/Oct/2025:09:59:00 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16866 "" "Go-http-client/1.1" Oct 14 05:59:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63996 DF PROTO=TCP SPT=43738 DPT=9102 SEQ=1409566212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7629A2290000000001030307) Oct 14 05:59:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 05:59:01 localhost podman[297658]: 2025-10-14 09:59:01.544363934 +0000 UTC m=+0.084824800 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd) Oct 14 05:59:01 localhost podman[297658]: 2025-10-14 09:59:01.556926968 +0000 UTC m=+0.097387844 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
config_id=multipathd, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 05:59:01 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 05:59:03 localhost sshd[297678]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:03 localhost openstack_network_exporter[248748]: ERROR 09:59:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:59:03 localhost openstack_network_exporter[248748]: ERROR 09:59:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:59:03 localhost openstack_network_exporter[248748]: ERROR 09:59:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:59:03 localhost openstack_network_exporter[248748]: ERROR 09:59:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:59:03 localhost openstack_network_exporter[248748]: Oct 14 05:59:03 localhost openstack_network_exporter[248748]: ERROR 09:59:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:59:03 localhost openstack_network_exporter[248748]: Oct 14 05:59:06 localhost sshd[297680]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:59:06 localhost podman[297682]: 2025-10-14 09:59:06.729433445 +0000 UTC m=+0.077442975 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 05:59:06 localhost podman[297682]: 2025-10-14 09:59:06.762483531 +0000 UTC m=+0.110493091 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251009) Oct 14 05:59:06 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:59:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:59:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:59:08 localhost sshd[297723]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:08 localhost podman[297701]: 2025-10-14 09:59:08.544275448 +0000 UTC m=+0.083062734 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public) Oct 14 05:59:08 localhost podman[297701]: 2025-10-14 09:59:08.585234363 +0000 UTC m=+0.124021629 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, vcs-type=git, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 14 05:59:08 localhost podman[297702]: 2025-10-14 09:59:08.597903999 +0000 UTC m=+0.133415659 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009) Oct 14 05:59:08 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:59:08 localhost systemd-logind[760]: New session 64 of user zuul. Oct 14 05:59:08 localhost systemd[1]: Started Session 64 of User zuul. 
Oct 14 05:59:08 localhost podman[297702]: 2025-10-14 09:59:08.711845941 +0000 UTC m=+0.247357601 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 14 05:59:08 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:59:08 localhost python3[297768]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 05:59:09 localhost subscription-manager[297769]: Unregistered machine with identity: fafbc9b5-5bcf-4c9b-8ea7-ff83fa6c70ff Oct 14 05:59:09 localhost sshd[297771]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:59:10 localhost podman[297773]: 2025-10-14 09:59:10.5589737 +0000 UTC m=+0.090230583 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 05:59:10 localhost podman[297773]: 2025-10-14 09:59:10.568178914 +0000 UTC m=+0.099435797 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 05:59:10 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 05:59:12 localhost nova_compute[295778]: 2025-10-14 09:59:12.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:59:12 localhost nova_compute[295778]: 2025-10-14 09:59:12.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 05:59:12 localhost nova_compute[295778]: 2025-10-14 09:59:12.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 05:59:12 localhost nova_compute[295778]: 2025-10-14 09:59:12.926 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 05:59:13 localhost sshd[297796]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:13 localhost nova_compute[295778]: 2025-10-14 09:59:13.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:59:13 localhost nova_compute[295778]: 2025-10-14 09:59:13.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:59:13 localhost nova_compute[295778]: 2025-10-14 09:59:13.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 05:59:13 localhost nova_compute[295778]: 2025-10-14 09:59:13.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:59:13 localhost nova_compute[295778]: 2025-10-14 09:59:13.933 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:59:13 localhost nova_compute[295778]: 2025-10-14 09:59:13.933 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:59:13 localhost nova_compute[295778]: 2025-10-14 09:59:13.934 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:59:13 localhost nova_compute[295778]: 2025-10-14 09:59:13.934 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 05:59:13 localhost nova_compute[295778]: 2025-10-14 09:59:13.935 2 DEBUG 
oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:59:14 localhost nova_compute[295778]: 2025-10-14 09:59:14.393 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:59:14 localhost nova_compute[295778]: 2025-10-14 09:59:14.567 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 05:59:14 localhost nova_compute[295778]: 2025-10-14 09:59:14.569 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12744MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 05:59:14 localhost nova_compute[295778]: 2025-10-14 09:59:14.569 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:59:14 localhost nova_compute[295778]: 2025-10-14 09:59:14.569 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:59:14 localhost 
nova_compute[295778]: 2025-10-14 09:59:14.666 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 05:59:14 localhost nova_compute[295778]: 2025-10-14 09:59:14.666 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 05:59:14 localhost nova_compute[295778]: 2025-10-14 09:59:14.701 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 05:59:15 localhost nova_compute[295778]: 2025-10-14 09:59:15.152 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 05:59:15 localhost nova_compute[295778]: 2025-10-14 09:59:15.159 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 05:59:15 localhost nova_compute[295778]: 2025-10-14 09:59:15.182 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider 
ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 05:59:15 localhost nova_compute[295778]: 2025-10-14 09:59:15.184 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 05:59:15 localhost nova_compute[295778]: 2025-10-14 09:59:15.185 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:59:16 localhost nova_compute[295778]: 2025-10-14 09:59:16.185 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:59:16 localhost nova_compute[295778]: 2025-10-14 09:59:16.186 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:59:16 localhost nova_compute[295778]: 2025-10-14 09:59:16.186 2 DEBUG 
oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:59:16 localhost nova_compute[295778]: 2025-10-14 09:59:16.187 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:59:16 localhost nova_compute[295778]: 2025-10-14 09:59:16.187 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 05:59:16 localhost sshd[297842]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:19 localhost sshd[297844]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:59:19 localhost podman[297846]: 2025-10-14 09:59:19.983441723 +0000 UTC m=+0.081679347 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:59:19 localhost podman[297846]: 2025-10-14 09:59:19.992523554 +0000 UTC m=+0.090761188 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 05:59:20 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:59:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23961 DF PROTO=TCP SPT=41964 DPT=9102 SEQ=2567007153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7629FBAE0000000001030307) Oct 14 05:59:24 localhost sshd[297864]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23962 DF PROTO=TCP SPT=41964 DPT=9102 SEQ=2567007153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7629FFA90000000001030307) Oct 14 05:59:26 localhost sshd[297866]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:59:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 05:59:26 localhost podman[297868]: 2025-10-14 09:59:26.552157892 +0000 UTC m=+0.089040542 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 05:59:26 localhost podman[297868]: 2025-10-14 09:59:26.557053122 +0000 UTC 
m=+0.093935802 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 05:59:26 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 05:59:26 localhost systemd[1]: tmp-crun.ftaaki.mount: Deactivated successfully. Oct 14 05:59:26 localhost podman[297869]: 2025-10-14 09:59:26.602050185 +0000 UTC m=+0.133157642 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:59:26 localhost podman[297869]: 2025-10-14 09:59:26.638140032 +0000 UTC m=+0.169247489 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 
'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 05:59:26 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:59:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23963 DF PROTO=TCP SPT=41964 DPT=9102 SEQ=2567007153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762A07AA0000000001030307) Oct 14 05:59:29 localhost sshd[297909]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:30 localhost podman[246584]: time="2025-10-14T09:59:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 05:59:30 localhost podman[246584]: @ - - [14/Oct/2025:09:59:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 05:59:30 localhost podman[246584]: @ - - [14/Oct/2025:09:59:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16870 "" "Go-http-client/1.1" Oct 14 05:59:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:6c:d8:2b MACDST=fa:16:3e:37:09:c3 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23964 DF PROTO=TCP SPT=41964 DPT=9102 SEQ=2567007153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A762A176A0000000001030307) Oct 14 05:59:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 05:59:32 localhost podman[297912]: 2025-10-14 09:59:32.543862552 +0000 UTC m=+0.085155920 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 05:59:32 localhost podman[297912]: 2025-10-14 09:59:32.585021844 +0000 UTC m=+0.126315172 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Oct 14 05:59:32 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 05:59:32 localhost sshd[297931]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:33 localhost openstack_network_exporter[248748]: ERROR 09:59:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:59:33 localhost openstack_network_exporter[248748]: ERROR 09:59:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 05:59:33 localhost openstack_network_exporter[248748]: ERROR 09:59:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 05:59:33 localhost openstack_network_exporter[248748]: ERROR 09:59:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 05:59:33 localhost openstack_network_exporter[248748]: Oct 14 05:59:33 localhost openstack_network_exporter[248748]: ERROR 09:59:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 05:59:33 localhost openstack_network_exporter[248748]: Oct 14 05:59:36 localhost sshd[297933]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 05:59:37 localhost podman[297935]: 2025-10-14 09:59:37.546510706 +0000 UTC m=+0.085741645 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:59:37 localhost podman[297935]: 2025-10-14 09:59:37.561292777 +0000 UTC m=+0.100523716 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 14 05:59:37 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 05:59:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 05:59:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 05:59:39 localhost sshd[297978]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:39 localhost podman[297955]: 2025-10-14 09:59:39.539917333 +0000 UTC m=+0.083713421 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container) Oct 14 05:59:39 localhost podman[297955]: 2025-10-14 09:59:39.556143963 +0000 UTC m=+0.099940041 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 05:59:39 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 05:59:39 localhost podman[297956]: 2025-10-14 09:59:39.648945514 +0000 UTC m=+0.186489546 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 05:59:39 localhost podman[297956]: 2025-10-14 09:59:39.714156573 +0000 UTC m=+0.251700615 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 05:59:39 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 05:59:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 05:59:40 localhost podman[298019]: 2025-10-14 09:59:40.736886433 +0000 UTC m=+0.080959978 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:59:40 localhost podman[298019]: 2025-10-14 09:59:40.774163521 +0000 UTC m=+0.118237066 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 05:59:40 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 05:59:41 localhost sshd[298080]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:42 localhost systemd[1]: Created slice User Slice of UID 1003. Oct 14 05:59:42 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Oct 14 05:59:42 localhost systemd-logind[760]: New session 65 of user tripleo-admin. Oct 14 05:59:42 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Oct 14 05:59:42 localhost systemd[1]: Starting User Manager for UID 1003... Oct 14 05:59:42 localhost systemd[298084]: Queued start job for default target Main User Target. Oct 14 05:59:42 localhost systemd[298084]: Created slice User Application Slice. 
Oct 14 05:59:42 localhost systemd[298084]: Started Mark boot as successful after the user session has run 2 minutes. Oct 14 05:59:42 localhost systemd[298084]: Started Daily Cleanup of User's Temporary Directories. Oct 14 05:59:42 localhost systemd[298084]: Reached target Paths. Oct 14 05:59:42 localhost systemd[298084]: Reached target Timers. Oct 14 05:59:42 localhost systemd[298084]: Starting D-Bus User Message Bus Socket... Oct 14 05:59:42 localhost systemd[298084]: Starting Create User's Volatile Files and Directories... Oct 14 05:59:42 localhost systemd[298084]: Listening on D-Bus User Message Bus Socket. Oct 14 05:59:42 localhost systemd[298084]: Reached target Sockets. Oct 14 05:59:42 localhost systemd[298084]: Finished Create User's Volatile Files and Directories. Oct 14 05:59:42 localhost systemd[298084]: Reached target Basic System. Oct 14 05:59:42 localhost systemd[298084]: Reached target Main User Target. Oct 14 05:59:42 localhost systemd[298084]: Startup finished in 148ms. Oct 14 05:59:42 localhost systemd[1]: Started User Manager for UID 1003. Oct 14 05:59:42 localhost systemd-journald[47332]: Field hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Oct 14 05:59:42 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 14 05:59:42 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:59:42 localhost systemd[1]: Started Session 65 of User tripleo-admin. Oct 14 05:59:42 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 05:59:42 localhost sshd[298175]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:43 localhost python3[298230]: ansible-ansible.builtin.systemd Invoked with name=iptables state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:59:44 localhost python3[298375]: ansible-ansible.builtin.systemd Invoked with name=nftables state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 14 05:59:44 localhost systemd[1]: Stopping Netfilter Tables... Oct 14 05:59:44 localhost systemd[1]: nftables.service: Deactivated successfully. Oct 14 05:59:44 localhost systemd[1]: Stopped Netfilter Tables. Oct 14 05:59:45 localhost python3[298523]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/tripleo-rules.nft block=# 100 ceph_alertmanager {'dport': [9093]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard {'dport': [8443]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana {'dport': [3100]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus {'dport': [9092]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw {'dport': ['8080']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon {'dport': [6789, 3300, '9100']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds {'dport': ['6800-7300', '9100']}#012add rule 
inet filter TRIPLEO_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr {'dport': ['6800-7300', 8444]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs {'dport': ['12049', '2049']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 2049 } ct state new counter accept comment "120 ceph_nfs"#012# 122 ceph rgw {'dport': ['8080', '8080', '9100']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 8080,8080,9100 } ct state new counter accept comment "122 ceph rgw"#012# 123 ceph_dashboard {'dport': [3100, 9090, 9092, 9093, 9094, 9100, 9283]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 3100,9090,9092,9093,9094,9100,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 05:59:46 localhost sshd[298541]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:49 localhost sshd[298543]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 05:59:50 localhost podman[298545]: 2025-10-14 09:59:50.525866572 +0000 UTC m=+0.065337523 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, config_id=edpm, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 05:59:50 localhost podman[298545]: 2025-10-14 09:59:50.53522772 +0000 UTC m=+0.074698741 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute) Oct 14 05:59:50 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 05:59:52 localhost sshd[298565]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:56 localhost sshd[298654]: main: sshd: ssh-rsa algorithm is disabled Oct 14 05:59:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 05:59:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 05:59:57 localhost systemd[1]: tmp-crun.cpJTeu.mount: Deactivated successfully. Oct 14 05:59:57 localhost podman[298693]: 2025-10-14 09:59:57.194608103 +0000 UTC m=+0.085008545 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 05:59:57 localhost podman[298693]: 2025-10-14 09:59:57.233226157 +0000 UTC m=+0.123626599 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, 
config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 05:59:57 localhost podman[298692]: 2025-10-14 09:59:57.244285651 +0000 UTC m=+0.138148084 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 14 05:59:57 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 05:59:57 localhost podman[298692]: 2025-10-14 09:59:57.315070717 +0000 UTC m=+0.208933170 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 05:59:57 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 05:59:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:59:57.622 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 05:59:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:59:57.623 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 05:59:57 localhost ovn_metadata_agent[161927]: 2025-10-14 09:59:57.623 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 05:59:59 localhost sshd[298751]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:00 localhost podman[246584]: time="2025-10-14T10:00:00Z" level=info msg="List containers: received `last` parameter - overwriting 
`limit`" Oct 14 06:00:00 localhost podman[246584]: @ - - [14/Oct/2025:10:00:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136323 "" "Go-http-client/1.1" Oct 14 06:00:00 localhost podman[246584]: @ - - [14/Oct/2025:10:00:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16867 "" "Go-http-client/1.1" Oct 14 06:00:02 localhost sshd[298789]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:00:03 localhost podman[298791]: 2025-10-14 10:00:03.082609092 +0000 UTC m=+0.083750072 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3) Oct 14 06:00:03 localhost podman[298791]: 2025-10-14 10:00:03.121195355 +0000 UTC m=+0.122336325 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:00:03 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:00:03 localhost openstack_network_exporter[248748]: ERROR 10:00:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:00:03 localhost openstack_network_exporter[248748]: ERROR 10:00:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:00:03 localhost openstack_network_exporter[248748]: ERROR 10:00:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:00:03 localhost openstack_network_exporter[248748]: ERROR 10:00:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:00:03 localhost openstack_network_exporter[248748]: Oct 14 06:00:03 localhost openstack_network_exporter[248748]: ERROR 10:00:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:00:03 localhost openstack_network_exporter[248748]: Oct 14 06:00:06 localhost sshd[298810]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:07 localhost podman[298891]: Oct 14 06:00:07 localhost podman[298891]: 2025-10-14 10:00:07.614401691 +0000 UTC m=+0.075149834 container create 9849d3073d96f21e8217274f1610d4a2222245ada172441fd3aff24cb4d0a09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hugle, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 
in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, release=553, ceph=True) Oct 14 06:00:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:00:07 localhost systemd[1]: Started libpod-conmon-9849d3073d96f21e8217274f1610d4a2222245ada172441fd3aff24cb4d0a09e.scope. Oct 14 06:00:07 localhost systemd[1]: Started libcrun container. 
Oct 14 06:00:07 localhost podman[298891]: 2025-10-14 10:00:07.584176129 +0000 UTC m=+0.044924312 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:00:07 localhost podman[298891]: 2025-10-14 10:00:07.695247125 +0000 UTC m=+0.155995258 container init 9849d3073d96f21e8217274f1610d4a2222245ada172441fd3aff24cb4d0a09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hugle, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 06:00:07 localhost kind_hugle[298907]: 167 167 Oct 14 06:00:07 localhost systemd[1]: libpod-9849d3073d96f21e8217274f1610d4a2222245ada172441fd3aff24cb4d0a09e.scope: Deactivated successfully. 
Oct 14 06:00:07 localhost podman[298906]: 2025-10-14 10:00:07.744598133 +0000 UTC m=+0.093565672 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:00:07 localhost podman[298906]: 2025-10-14 10:00:07.759157449 +0000 UTC m=+0.108124978 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:00:07 localhost podman[298891]: 2025-10-14 10:00:07.764193042 +0000 UTC m=+0.224941195 container start 9849d3073d96f21e8217274f1610d4a2222245ada172441fd3aff24cb4d0a09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hugle, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vendor=Red Hat, Inc., version=7, io.buildah.version=1.33.12, GIT_BRANCH=main, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Oct 14 06:00:07 localhost podman[298891]: 2025-10-14 10:00:07.764577502 +0000 UTC m=+0.225325645 container attach 9849d3073d96f21e8217274f1610d4a2222245ada172441fd3aff24cb4d0a09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hugle, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, release=553, maintainer=Guillaume Abrioux , ceph=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 
on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=) Oct 14 06:00:07 localhost podman[298891]: 2025-10-14 10:00:07.766875413 +0000 UTC m=+0.227623616 container died 9849d3073d96f21e8217274f1610d4a2222245ada172441fd3aff24cb4d0a09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hugle, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-type=git, release=553, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True) Oct 14 06:00:07 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:00:07 localhost podman[298924]: 2025-10-14 10:00:07.867951393 +0000 UTC m=+0.137745943 container remove 9849d3073d96f21e8217274f1610d4a2222245ada172441fd3aff24cb4d0a09e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hugle, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, vcs-type=git, name=rhceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:00:07 localhost systemd[1]: libpod-conmon-9849d3073d96f21e8217274f1610d4a2222245ada172441fd3aff24cb4d0a09e.scope: Deactivated successfully. Oct 14 06:00:07 localhost systemd[1]: Reloading. Oct 14 06:00:08 localhost systemd-sysv-generator[298976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 06:00:08 localhost systemd-rc-local-generator[298972]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 14 06:00:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 06:00:08 localhost systemd[1]: var-lib-containers-storage-overlay-e6296ab194aa1dfb77d7669f977b8ed2808697e7c055f15acdf519140d0b911a-merged.mount: Deactivated successfully. Oct 14 06:00:08 localhost systemd[1]: tmp-crun.VeN71a.mount: Deactivated successfully. Oct 14 06:00:08 localhost systemd[1]: Reloading. Oct 14 06:00:08 localhost systemd-rc-local-generator[299016]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 06:00:08 localhost systemd-sysv-generator[299020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 06:00:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 06:00:08 localhost systemd[1]: Starting Ceph mds.mds.np0005486731.onyaog for fcadf6e2-9176-5818-a8d0-37b19acf8eaf... 
Oct 14 06:00:08 localhost podman[299078]: Oct 14 06:00:09 localhost podman[299078]: 2025-10-14 10:00:09.009189936 +0000 UTC m=+0.075520154 container create 5454859a5ca188d983f623f0cb3524126c3f1692749be7d9192868cf89bd893c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mds-mds-np0005486731-onyaog, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux , version=7, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:00:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9161b21fe42f78b58b23194866e353e9648cdb29959a3bd0149309a94a451a8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 06:00:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9161b21fe42f78b58b23194866e353e9648cdb29959a3bd0149309a94a451a8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 14 06:00:09 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/9161b21fe42f78b58b23194866e353e9648cdb29959a3bd0149309a94a451a8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 06:00:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9161b21fe42f78b58b23194866e353e9648cdb29959a3bd0149309a94a451a8f/merged/var/lib/ceph/mds/ceph-mds.np0005486731.onyaog supports timestamps until 2038 (0x7fffffff) Oct 14 06:00:09 localhost podman[299078]: 2025-10-14 10:00:09.069815293 +0000 UTC m=+0.136145501 container init 5454859a5ca188d983f623f0cb3524126c3f1692749be7d9192868cf89bd893c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mds-mds-np0005486731-onyaog, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, name=rhceph, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7) Oct 14 06:00:09 localhost podman[299078]: 2025-10-14 10:00:09.077820475 +0000 UTC m=+0.144150673 container start 5454859a5ca188d983f623f0cb3524126c3f1692749be7d9192868cf89bd893c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mds-mds-np0005486731-onyaog, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, RELEASE=main, ceph=True, vendor=Red Hat, Inc., version=7) Oct 14 06:00:09 localhost bash[299078]: 5454859a5ca188d983f623f0cb3524126c3f1692749be7d9192868cf89bd893c Oct 14 06:00:09 localhost podman[299078]: 2025-10-14 10:00:08.97920498 +0000 UTC m=+0.045535178 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:00:09 localhost systemd[1]: Started Ceph mds.mds.np0005486731.onyaog for fcadf6e2-9176-5818-a8d0-37b19acf8eaf. 
Oct 14 06:00:09 localhost ceph-mds[299096]: set uid:gid to 167:167 (ceph:ceph) Oct 14 06:00:09 localhost ceph-mds[299096]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2 Oct 14 06:00:09 localhost ceph-mds[299096]: main not setting numa affinity Oct 14 06:00:09 localhost ceph-mds[299096]: pidfile_write: ignore empty --pid-file Oct 14 06:00:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mds-mds-np0005486731-onyaog[299092]: starting mds.mds.np0005486731.onyaog at Oct 14 06:00:09 localhost ceph-mds[299096]: mds.mds.np0005486731.onyaog Updating MDS map to version 8 from mon.0 Oct 14 06:00:09 localhost sshd[299132]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:09 localhost systemd[1]: session-64.scope: Deactivated successfully. Oct 14 06:00:09 localhost systemd-logind[760]: Session 64 logged out. Waiting for processes to exit. Oct 14 06:00:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:00:09 localhost systemd-logind[760]: Removed session 64. Oct 14 06:00:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 06:00:09 localhost podman[299135]: 2025-10-14 10:00:09.753894493 +0000 UTC m=+0.093984054 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.) Oct 14 06:00:09 localhost podman[299135]: 2025-10-14 10:00:09.769171557 +0000 UTC m=+0.109261068 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 06:00:09 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:00:09 localhost podman[299170]: 2025-10-14 10:00:09.850183036 +0000 UTC m=+0.088275653 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller) Oct 14 06:00:09 localhost podman[299170]: 
2025-10-14 10:00:09.949932641 +0000 UTC m=+0.188025258 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:00:09 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:00:10 localhost ceph-mds[299096]: mds.mds.np0005486731.onyaog Updating MDS map to version 9 from mon.0 Oct 14 06:00:10 localhost ceph-mds[299096]: mds.mds.np0005486731.onyaog Monitors have assigned me to become a standby. 
Oct 14 06:00:10 localhost podman[299293]: 2025-10-14 10:00:10.664590101 +0000 UTC m=+0.096948572 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, version=7, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:00:10 localhost podman[299293]: 2025-10-14 10:00:10.800146385 +0000 UTC m=+0.232504816 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, 
GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:00:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:00:10 localhost podman[299325]: 2025-10-14 10:00:10.932567126 +0000 UTC m=+0.080200397 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:00:10 localhost podman[299325]: 2025-10-14 10:00:10.944626967 +0000 UTC m=+0.092260268 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:00:10 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:00:12 localhost nova_compute[295778]: 2025-10-14 10:00:12.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:12 localhost sshd[299435]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:13 localhost nova_compute[295778]: 2025-10-14 10:00:13.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:13 localhost nova_compute[295778]: 2025-10-14 10:00:13.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:13 localhost nova_compute[295778]: 2025-10-14 10:00:13.933 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:00:13 localhost nova_compute[295778]: 2025-10-14 10:00:13.934 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:00:13 localhost nova_compute[295778]: 2025-10-14 10:00:13.934 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:00:13 localhost nova_compute[295778]: 2025-10-14 10:00:13.934 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:00:13 localhost nova_compute[295778]: 2025-10-14 10:00:13.935 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:00:14 localhost nova_compute[295778]: 2025-10-14 10:00:14.349 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:00:14 localhost nova_compute[295778]: 2025-10-14 10:00:14.540 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:00:14 localhost nova_compute[295778]: 2025-10-14 10:00:14.542 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12736MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:00:14 localhost nova_compute[295778]: 2025-10-14 10:00:14.543 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:00:14 localhost nova_compute[295778]: 2025-10-14 10:00:14.543 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:00:14 localhost nova_compute[295778]: 2025-10-14 10:00:14.606 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:00:14 localhost nova_compute[295778]: 2025-10-14 10:00:14.607 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:00:14 localhost nova_compute[295778]: 2025-10-14 10:00:14.630 2 DEBUG oslo_concurrency.processutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:00:15 localhost nova_compute[295778]: 2025-10-14 10:00:15.098 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:00:15 localhost nova_compute[295778]: 2025-10-14 10:00:15.105 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:00:15 localhost nova_compute[295778]: 2025-10-14 10:00:15.122 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:00:15 localhost nova_compute[295778]: 2025-10-14 10:00:15.125 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:00:15 localhost nova_compute[295778]: 2025-10-14 10:00:15.125 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.125 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.126 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.126 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:00:16 localhost sshd[299481]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.144 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.144 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.144 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.145 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.145 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.146 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:00:16 localhost nova_compute[295778]: 2025-10-14 10:00:16.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:17 localhost nova_compute[295778]: 2025-10-14 10:00:17.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:00:19 localhost sshd[299483]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:00:21 localhost podman[299486]: 2025-10-14 10:00:21.554914703 +0000 UTC m=+0.089439992 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=edpm) Oct 14 06:00:21 localhost podman[299486]: 2025-10-14 10:00:21.591333369 +0000 UTC m=+0.125858658 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:00:21 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:00:22 localhost sshd[299505]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:26 localhost sshd[299507]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:00:27 localhost podman[299510]: 2025-10-14 10:00:27.532040507 +0000 UTC m=+0.076020587 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:00:27 localhost podman[299510]: 2025-10-14 10:00:27.541153098 +0000 UTC 
m=+0.085133228 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009) Oct 14 06:00:27 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:00:27 localhost podman[299511]: 2025-10-14 10:00:27.595499079 +0000 UTC m=+0.133280174 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:00:27 localhost podman[299511]: 2025-10-14 10:00:27.648843664 +0000 UTC m=+0.186624769 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 14 06:00:27 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:00:29 localhost sshd[299551]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:00:30 localhost podman[246584]: time="2025-10-14T10:00:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 06:00:30 localhost podman[246584]: @ - - [14/Oct/2025:10:00:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138401 "" "Go-http-client/1.1"
Oct 14 06:00:30 localhost podman[246584]: @ - - [14/Oct/2025:10:00:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17356 "" "Go-http-client/1.1"
Oct 14 06:00:32 localhost sshd[299553]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:00:33 localhost openstack_network_exporter[248748]: ERROR 10:00:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:00:33 localhost openstack_network_exporter[248748]: ERROR 10:00:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:00:33 localhost openstack_network_exporter[248748]: ERROR 10:00:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 06:00:33 localhost openstack_network_exporter[248748]: ERROR 10:00:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 06:00:33 localhost openstack_network_exporter[248748]:
Oct 14 06:00:33 localhost openstack_network_exporter[248748]: ERROR 10:00:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 06:00:33 localhost openstack_network_exporter[248748]:
Oct 14
06:00:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:00:33 localhost podman[299555]: 2025-10-14 10:00:33.559307008 +0000 UTC m=+0.090120931 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:00:33 
localhost podman[299555]: 2025-10-14 10:00:33.574146842 +0000 UTC m=+0.104960765 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd) Oct 14 06:00:33 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:00:36 localhost sshd[299575]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:37 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Oct 14 06:00:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:00:38 localhost podman[299577]: 2025-10-14 10:00:38.196561632 +0000 UTC m=+0.085004655 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true) Oct 14 06:00:38 localhost podman[299577]: 2025-10-14 10:00:38.207025309 +0000 UTC m=+0.095468292 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid) Oct 14 06:00:38 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated 
successfully. Oct 14 06:00:39 localhost sshd[299596]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:00:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:00:40 localhost podman[299598]: 2025-10-14 10:00:40.552881533 +0000 UTC m=+0.091467156 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Oct 14 06:00:40 localhost podman[299598]: 2025-10-14 10:00:40.565106737 +0000 UTC m=+0.103692430 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm) Oct 14 06:00:40 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:00:40 localhost systemd[1]: tmp-crun.lKaHeT.mount: Deactivated successfully. 
Oct 14 06:00:40 localhost podman[299599]: 2025-10-14 10:00:40.668666863 +0000 UTC m=+0.204412891 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true) Oct 14 06:00:40 localhost podman[299599]: 2025-10-14 10:00:40.729409254 +0000 UTC m=+0.265155232 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:00:40 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:00:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:00:41 localhost podman[299643]: 2025-10-14 10:00:41.531666697 +0000 UTC m=+0.076751535 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:00:41 localhost podman[299643]: 2025-10-14 10:00:41.563525123 +0000 UTC m=+0.108609951 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:00:41 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:00:42 localhost sshd[299667]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:44 localhost systemd-logind[760]: Session 65 logged out. Waiting for processes to exit. Oct 14 06:00:44 localhost systemd[1]: session-65.scope: Deactivated successfully. Oct 14 06:00:44 localhost systemd[1]: session-65.scope: Consumed 1.933s CPU time. Oct 14 06:00:44 localhost systemd-logind[760]: Removed session 65. 
Oct 14 06:00:45 localhost sshd[299688]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:49 localhost sshd[299690]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:00:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:00:52 localhost systemd[1]: tmp-crun.9dzgj3.mount: Deactivated successfully. 
Oct 14 06:00:52 localhost podman[299693]: 2025-10-14 10:00:52.446691902 +0000 UTC m=+0.098499263 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Oct 14 06:00:52 localhost podman[299693]: 2025-10-14 10:00:52.485184073 +0000 UTC m=+0.136991384 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 14 06:00:52 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:00:52 localhost sshd[299713]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:54 localhost systemd[1]: Stopping User Manager for UID 1003... Oct 14 06:00:54 localhost systemd[298084]: Activating special unit Exit the Session... Oct 14 06:00:54 localhost systemd[298084]: Stopped target Main User Target. Oct 14 06:00:54 localhost systemd[298084]: Stopped target Basic System. Oct 14 06:00:54 localhost systemd[298084]: Stopped target Paths. Oct 14 06:00:54 localhost systemd[298084]: Stopped target Sockets. Oct 14 06:00:54 localhost systemd[298084]: Stopped target Timers. Oct 14 06:00:54 localhost systemd[298084]: Stopped Mark boot as successful after the user session has run 2 minutes. Oct 14 06:00:54 localhost systemd[298084]: Stopped Daily Cleanup of User's Temporary Directories. Oct 14 06:00:54 localhost systemd[298084]: Closed D-Bus User Message Bus Socket. Oct 14 06:00:54 localhost systemd[298084]: Stopped Create User's Volatile Files and Directories. Oct 14 06:00:54 localhost systemd[298084]: Removed slice User Application Slice. Oct 14 06:00:54 localhost systemd[298084]: Reached target Shutdown. Oct 14 06:00:54 localhost systemd[298084]: Finished Exit the Session. Oct 14 06:00:54 localhost systemd[298084]: Reached target Exit the Session. Oct 14 06:00:54 localhost systemd[1]: user@1003.service: Deactivated successfully. Oct 14 06:00:54 localhost systemd[1]: Stopped User Manager for UID 1003. Oct 14 06:00:54 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Oct 14 06:00:54 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Oct 14 06:00:54 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Oct 14 06:00:54 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Oct 14 06:00:54 localhost systemd[1]: Removed slice User Slice of UID 1003. Oct 14 06:00:54 localhost systemd[1]: user-1003.slice: Consumed 2.314s CPU time. 
Oct 14 06:00:55 localhost sshd[299716]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:00:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:00:57.623 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:00:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:00:57.624 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:00:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:00:57.625 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:00:57 localhost systemd[1]: tmp-crun.UvFbjz.mount: Deactivated successfully. 
Oct 14 06:00:57 localhost podman[299754]: 2025-10-14 10:00:57.750261208 +0000 UTC m=+0.094424493 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 06:00:57 localhost podman[299754]: 2025-10-14 10:00:57.761215722 +0000 UTC 
m=+0.105378967 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 06:00:57 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:00:57 localhost systemd[1]: tmp-crun.ooWWI0.mount: Deactivated successfully. Oct 14 06:00:57 localhost podman[299789]: 2025-10-14 10:00:57.860600217 +0000 UTC m=+0.105266544 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:00:57 localhost podman[299789]: 2025-10-14 10:00:57.869262829 +0000 UTC m=+0.113929106 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:00:57 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:00:59 localhost sshd[299847]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:00 localhost podman[246584]: time="2025-10-14T10:01:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:01:00 localhost podman[246584]: @ - - [14/Oct/2025:10:01:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138401 "" "Go-http-client/1.1" Oct 14 06:01:00 localhost podman[246584]: @ - - [14/Oct/2025:10:01:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17357 "" "Go-http-client/1.1" Oct 14 06:01:02 localhost sshd[299897]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:03 localhost openstack_network_exporter[248748]: ERROR 10:01:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:01:03 localhost openstack_network_exporter[248748]: ERROR 10:01:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:01:03 localhost openstack_network_exporter[248748]: ERROR 10:01:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:01:03 localhost openstack_network_exporter[248748]: ERROR 10:01:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:01:03 localhost openstack_network_exporter[248748]: Oct 14 06:01:03 localhost openstack_network_exporter[248748]: ERROR 10:01:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath 
Oct 14 06:01:03 localhost openstack_network_exporter[248748]: Oct 14 06:01:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:01:04 localhost podman[299899]: 2025-10-14 10:01:04.545231424 +0000 UTC m=+0.084163348 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 06:01:04 localhost podman[299899]: 2025-10-14 10:01:04.561085159 +0000 UTC m=+0.100017093 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:01:04 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: 
Deactivated successfully. Oct 14 06:01:05 localhost sshd[299918]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:01:08 localhost podman[299920]: 2025-10-14 10:01:08.523888425 +0000 UTC m=+0.065057936 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) 
Oct 14 06:01:08 localhost podman[299920]: 2025-10-14 10:01:08.53414087 +0000 UTC m=+0.075310371 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 06:01:08 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:01:09 localhost sshd[299939]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:01:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:01:11 localhost podman[299941]: 2025-10-14 10:01:11.542381991 +0000 UTC m=+0.081994119 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7) Oct 14 06:01:11 localhost podman[299941]: 2025-10-14 10:01:11.554770343 +0000 UTC m=+0.094382501 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 14 06:01:11 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:01:11 localhost podman[299942]: 2025-10-14 10:01:11.647334775 +0000 UTC m=+0.181763114 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller) Oct 14 06:01:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:01:11 localhost podman[299979]: 2025-10-14 10:01:11.745543489 +0000 UTC m=+0.079099302 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:01:11 localhost podman[299942]: 2025-10-14 10:01:11.757109569 +0000 UTC m=+0.291537918 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251009, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:01:11 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:01:11 localhost podman[299979]: 2025-10-14 10:01:11.779153111 +0000 UTC m=+0.112708944 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:01:11 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:01:12 localhost sshd[300008]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:12 localhost nova_compute[295778]: 2025-10-14 10:01:12.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:12 localhost nova_compute[295778]: 2025-10-14 10:01:12.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 14 06:01:12 localhost nova_compute[295778]: 2025-10-14 10:01:12.927 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 14 06:01:12 localhost nova_compute[295778]: 2025-10-14 10:01:12.928 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:12 localhost nova_compute[295778]: 2025-10-14 10:01:12.928 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 14 06:01:12 localhost nova_compute[295778]: 2025-10-14 10:01:12.969 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:13 localhost nova_compute[295778]: 
2025-10-14 10:01:13.989 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.015 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.015 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.016 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.016 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.016 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack 
--conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.486 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.607 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.608 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12742MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.608 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.609 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.677 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 
8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.677 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:01:14 localhost nova_compute[295778]: 2025-10-14 10:01:14.693 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:01:15 localhost nova_compute[295778]: 2025-10-14 10:01:15.143 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:01:15 localhost nova_compute[295778]: 2025-10-14 10:01:15.150 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:01:15 localhost nova_compute[295778]: 2025-10-14 10:01:15.176 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 
'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:01:15 localhost nova_compute[295778]: 2025-10-14 10:01:15.178 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:01:15 localhost nova_compute[295778]: 2025-10-14 10:01:15.179 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:01:15 localhost sshd[300054]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.094 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.095 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.095 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.110 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.110 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.111 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.111 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.112 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.112 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.112 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.113 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:01:17 localhost nova_compute[295778]: 2025-10-14 10:01:17.918 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:01:19 localhost sshd[300056]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:22 localhost sshd[300059]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:01:23 localhost podman[300061]: 2025-10-14 10:01:23.537944802 +0000 UTC m=+0.079726499 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 06:01:23 localhost podman[300061]: 2025-10-14 10:01:23.553217831 +0000 UTC m=+0.094999518 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009) Oct 14 06:01:23 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:01:25 localhost sshd[300080]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:01:28 localhost podman[300137]: 2025-10-14 10:01:28.556134557 +0000 UTC m=+0.079575894 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:01:28 localhost podman[300137]: 2025-10-14 10:01:28.565230202 +0000 UTC m=+0.088671509 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 
'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:01:28 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:01:28 localhost systemd[1]: tmp-crun.sC61Wj.mount: Deactivated successfully. Oct 14 06:01:28 localhost podman[300136]: 2025-10-14 10:01:28.659867008 +0000 UTC m=+0.186914142 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:01:28 localhost podman[300136]: 2025-10-14 10:01:28.669160078 +0000 UTC m=+0.196207202 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Oct 14 06:01:28 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:01:28 localhost sshd[300177]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:30 localhost podman[300258]: Oct 14 06:01:30 localhost podman[300258]: 2025-10-14 10:01:30.325912691 +0000 UTC m=+0.072944857 container create 4143889dcfb59f91dd5e34b05e9a7fd3ddd214e91444313641031ada700e435c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamarr, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, release=553) Oct 14 06:01:30 localhost systemd[1]: Started libpod-conmon-4143889dcfb59f91dd5e34b05e9a7fd3ddd214e91444313641031ada700e435c.scope. Oct 14 06:01:30 localhost systemd[1]: Started libcrun container. Oct 14 06:01:30 localhost podman[300258]: 2025-10-14 10:01:30.299682348 +0000 UTC m=+0.046714534 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:01:30 localhost podman[300258]: 2025-10-14 10:01:30.400757648 +0000 UTC m=+0.147789854 container init 4143889dcfb59f91dd5e34b05e9a7fd3ddd214e91444313641031ada700e435c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamarr, io.buildah.version=1.33.12, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Oct 14 06:01:30 localhost systemd[1]: tmp-crun.5FIZKN.mount: Deactivated successfully. 
Oct 14 06:01:30 localhost podman[300258]: 2025-10-14 10:01:30.413843758 +0000 UTC m=+0.160875914 container start 4143889dcfb59f91dd5e34b05e9a7fd3ddd214e91444313641031ada700e435c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamarr, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, vcs-type=git, release=553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph) Oct 14 06:01:30 localhost podman[300258]: 2025-10-14 10:01:30.414317412 +0000 UTC m=+0.161349568 container attach 4143889dcfb59f91dd5e34b05e9a7fd3ddd214e91444313641031ada700e435c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamarr, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, version=7, release=553, name=rhceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main) Oct 14 06:01:30 localhost eager_lamarr[300273]: 167 167 Oct 14 06:01:30 localhost systemd[1]: libpod-4143889dcfb59f91dd5e34b05e9a7fd3ddd214e91444313641031ada700e435c.scope: Deactivated successfully. Oct 14 06:01:30 localhost podman[300258]: 2025-10-14 10:01:30.418324649 +0000 UTC m=+0.165356825 container died 4143889dcfb59f91dd5e34b05e9a7fd3ddd214e91444313641031ada700e435c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamarr, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., architecture=x86_64, release=553, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.33.12, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, 
com.redhat.component=rhceph-container) Oct 14 06:01:30 localhost podman[300278]: 2025-10-14 10:01:30.538099981 +0000 UTC m=+0.104033521 container remove 4143889dcfb59f91dd5e34b05e9a7fd3ddd214e91444313641031ada700e435c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamarr, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, version=7, GIT_BRANCH=main, io.openshift.expose-services=) Oct 14 06:01:30 localhost systemd[1]: libpod-conmon-4143889dcfb59f91dd5e34b05e9a7fd3ddd214e91444313641031ada700e435c.scope: Deactivated successfully. Oct 14 06:01:30 localhost podman[246584]: time="2025-10-14T10:01:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:01:30 localhost systemd[1]: Reloading. 
Oct 14 06:01:30 localhost podman[246584]: @ - - [14/Oct/2025:10:01:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138401 "" "Go-http-client/1.1" Oct 14 06:01:30 localhost podman[246584]: @ - - [14/Oct/2025:10:01:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17360 "" "Go-http-client/1.1" Oct 14 06:01:30 localhost systemd-rc-local-generator[300316]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 06:01:30 localhost systemd-sysv-generator[300324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 06:01:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 06:01:30 localhost systemd[1]: var-lib-containers-storage-overlay-ffce5e4ab7d790e741cdc0543d5bac452fb42ddcf2d125c84cee5c1595a3f58b-merged.mount: Deactivated successfully. Oct 14 06:01:31 localhost systemd[1]: Reloading. Oct 14 06:01:31 localhost systemd-rc-local-generator[300356]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 06:01:31 localhost systemd-sysv-generator[300361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 06:01:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 06:01:31 localhost systemd[1]: Starting Ceph mgr.np0005486731.swasqz for fcadf6e2-9176-5818-a8d0-37b19acf8eaf... 
Oct 14 06:01:31 localhost podman[300424]: Oct 14 06:01:31 localhost podman[300424]: 2025-10-14 10:01:31.727228895 +0000 UTC m=+0.074199290 container create 4363385360cd2810e788de55bb6751cdc8b96c22d73995014e0945a9a4f05c3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:01:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f48c30ae90623269dcdfdea557ca2eebafe37488415974a94b9d027e5f43c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 06:01:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f48c30ae90623269dcdfdea557ca2eebafe37488415974a94b9d027e5f43c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 14 06:01:31 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/b3f48c30ae90623269dcdfdea557ca2eebafe37488415974a94b9d027e5f43c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 06:01:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f48c30ae90623269dcdfdea557ca2eebafe37488415974a94b9d027e5f43c3/merged/var/lib/ceph/mgr/ceph-np0005486731.swasqz supports timestamps until 2038 (0x7fffffff) Oct 14 06:01:31 localhost podman[300424]: 2025-10-14 10:01:31.77663075 +0000 UTC m=+0.123601165 container init 4363385360cd2810e788de55bb6751cdc8b96c22d73995014e0945a9a4f05c3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, release=553, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.33.12, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git) Oct 14 06:01:31 localhost podman[300424]: 2025-10-14 10:01:31.785028625 +0000 UTC m=+0.131999050 container start 4363385360cd2810e788de55bb6751cdc8b96c22d73995014e0945a9a4f05c3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, version=7, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.buildah.version=1.33.12, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux , architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:01:31 localhost bash[300424]: 4363385360cd2810e788de55bb6751cdc8b96c22d73995014e0945a9a4f05c3a Oct 14 06:01:31 localhost podman[300424]: 2025-10-14 10:01:31.697400426 +0000 UTC m=+0.044370901 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:01:31 localhost systemd[1]: Started Ceph mgr.np0005486731.swasqz for fcadf6e2-9176-5818-a8d0-37b19acf8eaf. 
Oct 14 06:01:31 localhost ceph-mgr[300442]: set uid:gid to 167:167 (ceph:ceph) Oct 14 06:01:31 localhost ceph-mgr[300442]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Oct 14 06:01:31 localhost ceph-mgr[300442]: pidfile_write: ignore empty --pid-file Oct 14 06:01:31 localhost ceph-mgr[300442]: mgr[py] Loading python module 'alerts' Oct 14 06:01:31 localhost ceph-mgr[300442]: mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 14 06:01:31 localhost ceph-mgr[300442]: mgr[py] Loading python module 'balancer' Oct 14 06:01:31 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:31.953+0000 7fb2bbf38140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 14 06:01:32 localhost ceph-mgr[300442]: mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 14 06:01:32 localhost ceph-mgr[300442]: mgr[py] Loading python module 'cephadm' Oct 14 06:01:32 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:32.018+0000 7fb2bbf38140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 14 06:01:32 localhost sshd[300467]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:01:32 localhost ceph-mgr[300442]: mgr[py] Loading python module 'crash' Oct 14 06:01:32 localhost ceph-mgr[300442]: mgr[py] Module crash has missing NOTIFY_TYPES member Oct 14 06:01:32 localhost ceph-mgr[300442]: mgr[py] Loading python module 'dashboard' Oct 14 06:01:32 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:32.634+0000 7fb2bbf38140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Loading python module 'devicehealth' Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Loading python module 
'diskprediction_local' Oct 14 06:01:33 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:33.223+0000 7fb2bbf38140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 14 06:01:33 localhost openstack_network_exporter[248748]: ERROR 10:01:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:01:33 localhost openstack_network_exporter[248748]: ERROR 10:01:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:01:33 localhost openstack_network_exporter[248748]: ERROR 10:01:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:01:33 localhost openstack_network_exporter[248748]: ERROR 10:01:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:01:33 localhost openstack_network_exporter[248748]: Oct 14 06:01:33 localhost openstack_network_exporter[248748]: ERROR 10:01:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:01:33 localhost openstack_network_exporter[248748]: Oct 14 06:01:33 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Oct 14 06:01:33 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Oct 14 06:01:33 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: from numpy import show_config as show_numpy_config Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 14 06:01:33 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:33.386+0000 7fb2bbf38140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Loading python module 'influx' Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Module influx has missing NOTIFY_TYPES member Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Loading python module 'insights' Oct 14 06:01:33 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:33.448+0000 7fb2bbf38140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Loading python module 'iostat' Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Loading python module 'k8sevents' Oct 14 06:01:33 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:33.561+0000 7fb2bbf38140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Loading python module 'localpool' Oct 14 06:01:33 localhost ceph-mgr[300442]: mgr[py] Loading python module 'mds_autoscaler' Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Loading python module 'mirroring' Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Loading python module 'nfs' Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Loading python module 'orchestrator' Oct 14 06:01:34 localhost 
ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:34.285+0000 7fb2bbf38140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Loading python module 'osd_perf_query' Oct 14 06:01:34 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:34.425+0000 7fb2bbf38140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Loading python module 'osd_support' Oct 14 06:01:34 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:34.487+0000 7fb2bbf38140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Loading python module 'pg_autoscaler' Oct 14 06:01:34 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:34.541+0000 7fb2bbf38140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Loading python module 'progress' Oct 14 06:01:34 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:34.605+0000 7fb2bbf38140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Module progress has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Loading python module 'prometheus' Oct 14 06:01:34 localhost 
ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:34.664+0000 7fb2bbf38140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:34.954+0000 7fb2bbf38140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 14 06:01:34 localhost ceph-mgr[300442]: mgr[py] Loading python module 'rbd_support' Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Loading python module 'restful' Oct 14 06:01:35 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:35.034+0000 7fb2bbf38140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Loading python module 'rgw' Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Loading python module 'rook' Oct 14 06:01:35 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:35.352+0000 7fb2bbf38140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 14 06:01:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:01:35 localhost podman[300474]: 2025-10-14 10:01:35.539059964 +0000 UTC m=+0.079159174 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 06:01:35 localhost podman[300474]: 2025-10-14 10:01:35.552114144 +0000 UTC m=+0.092213404 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 14 06:01:35 localhost sshd[300493]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:01:35 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Loading python module 'selftest'
Oct 14 06:01:35 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:35.771+0000 7fb2bbf38140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 14 06:01:35 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:35.833+0000 7fb2bbf38140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Loading python module 'snap_schedule'
Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Loading python module 'stats'
Oct 14 06:01:35 localhost ceph-mgr[300442]: mgr[py] Loading python module 'status'
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Loading python module 'telegraf'
Oct 14 06:01:36 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:36.021+0000 7fb2bbf38140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Loading python module 'telemetry'
Oct 14 06:01:36 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:36.079+0000 7fb2bbf38140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Loading python module 'test_orchestrator'
Oct 14 06:01:36 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:36.206+0000 7fb2bbf38140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Loading python module 'volumes'
Oct 14 06:01:36 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:36.352+0000 7fb2bbf38140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Loading python module 'zabbix'
Oct 14 06:01:36 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:36.535+0000 7fb2bbf38140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:01:36.592+0000 7fb2bbf38140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 14 06:01:36 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557a99fd5600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Oct 14 06:01:36 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3165030492
Oct 14 06:01:37 localhost systemd[1]: tmp-crun.fiKt0s.mount: Deactivated successfully.
Oct 14 06:01:37 localhost podman[300623]: 2025-10-14 10:01:37.891530392 +0000 UTC m=+0.102680425 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, vcs-type=git, version=7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux )
Oct 14 06:01:37 localhost podman[300623]: 2025-10-14 10:01:37.987156516 +0000 UTC m=+0.198306549 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 14 06:01:38 localhost sshd[300722]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:01:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 06:01:39 localhost podman[300724]: 2025-10-14 10:01:39.100929149 +0000 UTC m=+0.089945713 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible)
Oct 14 06:01:39 localhost podman[300724]: 2025-10-14 10:01:39.113185168 +0000 UTC m=+0.102201732 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 14 06:01:39 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:01:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:01:41 localhost systemd[1]: tmp-crun.9DcJbJ.mount: Deactivated successfully.
Oct 14 06:01:41 localhost podman[300833]: 2025-10-14 10:01:41.712832683 +0000 UTC m=+0.102195631 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Oct 14 06:01:41 localhost podman[300833]: 2025-10-14 10:01:41.728090092 +0000 UTC m=+0.117453020 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Oct 14 06:01:41 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:01:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:01:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:01:41 localhost podman[300907]: 2025-10-14 10:01:41.964982644 +0000 UTC m=+0.096654803 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 14 06:01:42 localhost podman[300908]: 2025-10-14 10:01:42.045603026 +0000 UTC m=+0.173401480 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 06:01:42 localhost podman[300907]: 2025-10-14 10:01:42.062107799 +0000 UTC m=+0.193779948 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller)
Oct 14 06:01:42 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:01:42 localhost podman[300908]: 2025-10-14 10:01:42.077104251 +0000 UTC m=+0.204902685 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 14 06:01:42 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:01:42 localhost sshd[301019]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:01:45 localhost sshd[301523]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:01:48 localhost sshd[301543]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:01:52 localhost sshd[301545]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:01:53 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557a99fd4f20 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Oct 14 06:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 06:01:54 localhost podman[301548]: 2025-10-14 10:01:54.557246587 +0000 UTC m=+0.093148889 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 06:01:54 localhost podman[301548]: 2025-10-14 10:01:54.572037073 +0000 UTC m=+0.107939325 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Oct 14 06:01:54 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 06:01:55 localhost sshd[301567]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:01:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:01:57.627 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:01:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:01:57.632 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:01:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:01:57.632 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:01:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 06:01:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 06:01:58 localhost podman[301570]: 2025-10-14 10:01:58.749001271 +0000 UTC m=+0.094569637 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 06:01:58 localhost podman[301570]: 2025-10-14 10:01:58.763244313 +0000 UTC m=+0.108812689 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 14 06:01:58 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:01:58 localhost sshd[301617]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:01:58 localhost podman[301587]: 2025-10-14 10:01:58.847130472 +0000 UTC m=+0.088567405 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent)
Oct 14 06:01:58 localhost podman[301587]: 2025-10-14 10:01:58.855085316 +0000 UTC m=+0.096522219 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:01:58 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:01:59 localhost podman[301689]: Oct 14 06:01:59 localhost podman[301689]: 2025-10-14 10:01:59.521298959 +0000 UTC m=+0.082184894 container create f0ff4e0b1f21da8987f8b0ed0498c8f5a14829332aa87b4df2e1a85d36e8c8e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_matsumoto, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, ceph=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, name=rhceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 06:01:59 localhost systemd[1]: Started 
libpod-conmon-f0ff4e0b1f21da8987f8b0ed0498c8f5a14829332aa87b4df2e1a85d36e8c8e4.scope. Oct 14 06:01:59 localhost systemd[1]: Started libcrun container. Oct 14 06:01:59 localhost podman[301689]: 2025-10-14 10:01:59.487156674 +0000 UTC m=+0.048042639 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:01:59 localhost podman[301689]: 2025-10-14 10:01:59.599880246 +0000 UTC m=+0.160766181 container init f0ff4e0b1f21da8987f8b0ed0498c8f5a14829332aa87b4df2e1a85d36e8c8e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_matsumoto, ceph=True, vcs-type=git, RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, release=553, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:01:59 localhost podman[301689]: 2025-10-14 10:01:59.610859371 +0000 UTC m=+0.171745306 container start f0ff4e0b1f21da8987f8b0ed0498c8f5a14829332aa87b4df2e1a85d36e8c8e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_matsumoto, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7) Oct 14 06:01:59 localhost podman[301689]: 2025-10-14 10:01:59.611113537 +0000 UTC m=+0.171999492 container attach f0ff4e0b1f21da8987f8b0ed0498c8f5a14829332aa87b4df2e1a85d36e8c8e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_matsumoto, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.33.12, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, release=553, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-type=git) Oct 14 06:01:59 localhost charming_matsumoto[301704]: 167 167 Oct 14 06:01:59 localhost systemd[1]: libpod-f0ff4e0b1f21da8987f8b0ed0498c8f5a14829332aa87b4df2e1a85d36e8c8e4.scope: Deactivated successfully. Oct 14 06:01:59 localhost podman[301689]: 2025-10-14 10:01:59.618887756 +0000 UTC m=+0.179773751 container died f0ff4e0b1f21da8987f8b0ed0498c8f5a14829332aa87b4df2e1a85d36e8c8e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_matsumoto, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container) Oct 14 06:01:59 localhost podman[301709]: 2025-10-14 10:01:59.709970838 +0000 UTC m=+0.085346959 container remove f0ff4e0b1f21da8987f8b0ed0498c8f5a14829332aa87b4df2e1a85d36e8c8e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_matsumoto, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, release=553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, RELEASE=main, build-date=2025-09-24T08:57:55) Oct 14 06:01:59 localhost systemd[1]: libpod-conmon-f0ff4e0b1f21da8987f8b0ed0498c8f5a14829332aa87b4df2e1a85d36e8c8e4.scope: Deactivated successfully. Oct 14 06:01:59 localhost systemd[1]: var-lib-containers-storage-overlay-a89eeba8efa614a23d6ce34efa9e89f7fd233dd7297e9b03a8b4b50a725142eb-merged.mount: Deactivated successfully. 
Oct 14 06:01:59 localhost podman[301725]: Oct 14 06:01:59 localhost podman[301725]: 2025-10-14 10:01:59.830050498 +0000 UTC m=+0.079450201 container create 9e7ac23f41f450d29810f69b7c75ae2f4c6714a24609efe81bdf569085178ba9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_wilson, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:01:59 localhost systemd[1]: Started libpod-conmon-9e7ac23f41f450d29810f69b7c75ae2f4c6714a24609efe81bdf569085178ba9.scope. Oct 14 06:01:59 localhost systemd[1]: Started libcrun container. 
Oct 14 06:01:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd968f9c51d5fdac50043fa6726f9f743c168a32240b59412d43dc96cf201df/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Oct 14 06:01:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd968f9c51d5fdac50043fa6726f9f743c168a32240b59412d43dc96cf201df/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Oct 14 06:01:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd968f9c51d5fdac50043fa6726f9f743c168a32240b59412d43dc96cf201df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 06:01:59 localhost podman[301725]: 2025-10-14 10:01:59.796412566 +0000 UTC m=+0.045812269 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:01:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd968f9c51d5fdac50043fa6726f9f743c168a32240b59412d43dc96cf201df/merged/var/lib/ceph/mon/ceph-np0005486731 supports timestamps until 2038 (0x7fffffff) Oct 14 06:01:59 localhost podman[301725]: 2025-10-14 10:01:59.906976801 +0000 UTC m=+0.156376494 container init 9e7ac23f41f450d29810f69b7c75ae2f4c6714a24609efe81bdf569085178ba9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_wilson, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc.) Oct 14 06:01:59 localhost podman[301725]: 2025-10-14 10:01:59.916032853 +0000 UTC m=+0.165432546 container start 9e7ac23f41f450d29810f69b7c75ae2f4c6714a24609efe81bdf569085178ba9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_wilson, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, release=553) Oct 14 06:01:59 localhost podman[301725]: 2025-10-14 10:01:59.91628003 +0000 UTC m=+0.165679763 container attach 9e7ac23f41f450d29810f69b7c75ae2f4c6714a24609efe81bdf569085178ba9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_wilson, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, RELEASE=main, release=553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main) Oct 14 06:02:00 localhost systemd[1]: libpod-9e7ac23f41f450d29810f69b7c75ae2f4c6714a24609efe81bdf569085178ba9.scope: Deactivated successfully. 
Oct 14 06:02:00 localhost podman[301725]: 2025-10-14 10:02:00.003117818 +0000 UTC m=+0.252517521 container died 9e7ac23f41f450d29810f69b7c75ae2f4c6714a24609efe81bdf569085178ba9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_wilson, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, ceph=True, release=553, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux ) Oct 14 06:02:00 localhost podman[301766]: 2025-10-14 10:02:00.101765883 +0000 UTC m=+0.083957972 container remove 9e7ac23f41f450d29810f69b7c75ae2f4c6714a24609efe81bdf569085178ba9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_wilson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, RELEASE=main, ceph=True, name=rhceph, vcs-type=git, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, 
com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:02:00 localhost systemd[1]: libpod-conmon-9e7ac23f41f450d29810f69b7c75ae2f4c6714a24609efe81bdf569085178ba9.scope: Deactivated successfully. Oct 14 06:02:00 localhost systemd[1]: Reloading. Oct 14 06:02:00 localhost systemd-rc-local-generator[301803]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 06:02:00 localhost systemd-sysv-generator[301806]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 06:02:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 06:02:00 localhost systemd[1]: var-lib-containers-storage-overlay-5fd968f9c51d5fdac50043fa6726f9f743c168a32240b59412d43dc96cf201df-merged.mount: Deactivated successfully. Oct 14 06:02:00 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557a99fd5080 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Oct 14 06:02:00 localhost systemd[1]: Reloading. 
Oct 14 06:02:00 localhost podman[246584]: time="2025-10-14T10:02:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:02:00 localhost systemd-rc-local-generator[301845]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 06:02:00 localhost podman[246584]: @ - - [14/Oct/2025:10:02:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 140467 "" "Go-http-client/1.1" Oct 14 06:02:00 localhost systemd-sysv-generator[301850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 06:02:00 localhost podman[246584]: @ - - [14/Oct/2025:10:02:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17847 "" "Go-http-client/1.1" Oct 14 06:02:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 06:02:00 localhost systemd[1]: Starting Ceph mon.np0005486731 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf... 
Oct 14 06:02:01 localhost podman[301912]: Oct 14 06:02:01 localhost podman[301912]: 2025-10-14 10:02:01.205388815 +0000 UTC m=+0.076363648 container create 8bb7ee7976ae565d31875715174b77f21bcce98caa433d6330d6cd13c64416f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mon-np0005486731, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=553, CEPH_POINT_RELEASE=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 06:02:01 localhost systemd[1]: tmp-crun.jCwylV.mount: Deactivated successfully. 
Oct 14 06:02:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b1814c433f574c7d4162875dc1d024d406d345df0a3d2dd282f4d57e62ecf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 14 06:02:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b1814c433f574c7d4162875dc1d024d406d345df0a3d2dd282f4d57e62ecf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 06:02:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b1814c433f574c7d4162875dc1d024d406d345df0a3d2dd282f4d57e62ecf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 06:02:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b1814c433f574c7d4162875dc1d024d406d345df0a3d2dd282f4d57e62ecf8/merged/var/lib/ceph/mon/ceph-np0005486731 supports timestamps until 2038 (0x7fffffff) Oct 14 06:02:01 localhost podman[301912]: 2025-10-14 10:02:01.266023181 +0000 UTC m=+0.136998004 container init 8bb7ee7976ae565d31875715174b77f21bcce98caa433d6330d6cd13c64416f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mon-np0005486731, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-type=git, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main) Oct 14 06:02:01 localhost podman[301912]: 2025-10-14 10:02:01.27382436 +0000 UTC m=+0.144799193 container start 8bb7ee7976ae565d31875715174b77f21bcce98caa433d6330d6cd13c64416f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mon-np0005486731, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:02:01 localhost bash[301912]: 8bb7ee7976ae565d31875715174b77f21bcce98caa433d6330d6cd13c64416f8 Oct 14 06:02:01 localhost podman[301912]: 2025-10-14 10:02:01.174256291 +0000 UTC m=+0.045231144 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:02:01 localhost systemd[1]: Started Ceph mon.np0005486731 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf. 
Oct 14 06:02:01 localhost ceph-mon[301930]: set uid:gid to 167:167 (ceph:ceph) Oct 14 06:02:01 localhost ceph-mon[301930]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Oct 14 06:02:01 localhost ceph-mon[301930]: pidfile_write: ignore empty --pid-file Oct 14 06:02:01 localhost ceph-mon[301930]: load: jerasure load: lrc Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: RocksDB version: 7.9.2 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Git sha 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: DB SUMMARY Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: DB Session ID: PP6GOKDVVBVE8Q3KEL61 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: CURRENT file: CURRENT Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: IDENTITY file: IDENTITY Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005486731/store.db dir, Total Num: 0, files: Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005486731/store.db: 000004.log size: 886 ; Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.error_if_exists: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.create_if_missing: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.paranoid_checks: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.env: 0x555f3b0af9e0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.fs: PosixFileSystem Oct 14 
06:02:01 localhost ceph-mon[301930]: rocksdb: Options.info_log: 0x555f3bfd0d20 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_file_opening_threads: 16 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.statistics: (nil) Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.use_fsync: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_log_file_size: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.log_file_time_to_roll: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.keep_log_file_num: 1000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.recycle_log_file_num: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.allow_fallocate: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.allow_mmap_reads: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.allow_mmap_writes: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.use_direct_reads: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.create_missing_column_families: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.db_log_dir: Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.wal_dir: Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.table_cache_numshardbits: 6 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 14 06:02:01 localhost 
ceph-mon[301930]: rocksdb: Options.advise_random_on_open: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.db_write_buffer_size: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.write_buffer_manager: 0x555f3bfe1540 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.random_access_max_buffer_size: 1048576 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.use_adaptive_mutex: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.rate_limiter: (nil) Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.wal_recovery_mode: 2 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.enable_thread_tracking: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.enable_pipelined_write: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.unordered_write: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.row_cache: None Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.wal_filter: None Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.allow_ingest_behind: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.two_write_queues: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.manual_wal_flush: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.wal_compression: 0 Oct 
14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.atomic_flush: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.persist_stats_to_disk: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.log_readahead_size: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.file_checksum_gen_factory: Unknown Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.best_efforts_recovery: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.allow_data_in_errors: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.db_host_id: __hostname__ Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.enforce_single_del_contracts: true Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_background_jobs: 2 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_background_compactions: -1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_subcompactions: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.delayed_write_rate : 16777216 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_total_wal_size: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.stats_dump_period_sec: 600 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.stats_persist_period_sec: 600 Oct 14 06:02:01 
localhost ceph-mon[301930]: rocksdb: Options.stats_history_buffer_size: 1048576 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_open_files: -1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bytes_per_sync: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.strict_bytes_per_sync: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_readahead_size: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_background_flushes: -1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Compression algorithms supported: Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: #011kZSTD supported: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: #011kXpressCompression supported: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: #011kBZip2Compression supported: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: #011kLZ4Compression supported: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: #011kZlibCompression supported: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: #011kSnappyCompression supported: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: DMutex implementation: pthread_mutex_t Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005486731/store.db/MANIFEST-000005 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 06:02:01 localhost 
ceph-mon[301930]: rocksdb: Options.merge_operator: Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_filter: None Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_filter_factory: None Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.sst_partitioner_factory: None Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555f3bfd0980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x555f3bfcd350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.write_buffer_size: 33554432 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_write_buffer_number: 2 Oct 14 06:02:01 localhost 
ceph-mon[301930]: rocksdb: Options.compression: NoCompression Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression: Disabled Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.prefix_extractor: nullptr Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.num_levels: 7 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compression_opts.level: 32767 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 06:02:01 
localhost ceph-mon[301930]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compression_opts.enabled: false Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_bytes_for_level_base: 268435456 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 06:02:01 
localhost ceph-mon[301930]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.arena_block_size: 1048576 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: 
Options.table_properties_collectors: Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.inplace_update_support: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.bloom_locality: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.max_successive_merges: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.force_consistency_checks: 1 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.ttl: 2592000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.enable_blob_files: false Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.min_blob_size: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.blob_file_size: 268435456 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005486731/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8075a854-41fd-4ab6-89af-6366aa1d00c3 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436121334764, "job": 1, "event": "recovery_started", "wal_files": [4]} Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436121337660, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": 
"bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436121, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8075a854-41fd-4ab6-89af-6366aa1d00c3", "db_session_id": "PP6GOKDVVBVE8Q3KEL61", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436121337854, "job": 1, "event": "recovery_finished"} Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x555f3bff4e00 Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: DB pointer 0x555f3c0ea000 Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731 does not exist in monmap, will attempt to join an existing cluster Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:02:01 localhost ceph-mon[301930]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 
00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s 
write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555f3bfcd350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,1.08 KB,0.000205636%)#012#012** File Read Latency Histogram By Level [default] ** Oct 14 06:02:01 localhost ceph-mon[301930]: using public_addr v2:172.18.0.106:0/0 -> [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] Oct 14 06:02:01 localhost ceph-mon[301930]: starting mon.np0005486731 rank -1 at public addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] at bind addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005486731 fsid fcadf6e2-9176-5818-a8d0-37b19acf8eaf Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(???) 
e0 preinit fsid fcadf6e2-9176-5818-a8d0-37b19acf8eaf Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(synchronizing) e5 sync_obtain_latest_monmap Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(synchronizing) e5 sync_obtain_latest_monmap obtained monmap e5 Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(synchronizing).mds e16 new map Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-14T08:11:54.831494+0000#012modified#0112025-10-14T10:00:48.835986+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01178#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26888}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26888 members: 26888#012[mds.mds.np0005486732.xkownj{0:26888} state up:active seq 13 addr [v2:172.18.0.107:6808/1205328170,v1:172.18.0.107:6809/1205328170] compat 
{c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005486733.tvstmf{-1:17244} state up:standby seq 1 addr [v2:172.18.0.108:6808/3626555326,v1:172.18.0.108:6809/3626555326] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005486731.onyaog{-1:17256} state up:standby seq 1 addr [v2:172.18.0.106:6808/799411272,v1:172.18.0.106:6809/799411272] compat {c=[1],r=[1],i=[17ff]}] Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(synchronizing).osd e79 crush map has features 3314933000852226048, adjusting msgr requires Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(synchronizing).osd e79 crush map has features 288514051259236352, adjusting msgr requires Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(synchronizing).osd e79 crush map has features 288514051259236352, adjusting msgr requires Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(synchronizing).osd e79 crush map has features 288514051259236352, adjusting msgr requires Oct 14 06:02:01 localhost ceph-mon[301930]: Removing key for mds.mds.np0005486730.hzolgi Oct 14 06:02:01 localhost ceph-mon[301930]: Removing daemon mds.mds.np0005486729.iznaug from np0005486729.localdomain -- ports [] Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth rm", "entity": "mds.mds.np0005486729.iznaug"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd='[{"prefix": "auth rm", "entity": "mds.mds.np0005486729.iznaug"}]': finished Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Removing key for mds.mds.np0005486729.iznaug Oct 14 06:02:01 localhost ceph-mon[301930]: 
from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label mgr to host np0005486731.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label mgr to host 
np0005486732.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label mgr to host np0005486733.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Saving service mgr spec with placement label:mgr Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow 
*", "mds", "allow *"]}]': finished Oct 14 06:02:01 localhost ceph-mon[301930]: Deploying daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Oct 14 06:02:01 localhost ceph-mon[301930]: Deploying daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label mon to host np0005486728.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label _admin to host np0005486728.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost 
ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Oct 14 06:02:01 localhost ceph-mon[301930]: Deploying daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label mon to host np0005486729.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label _admin to host np0005486729.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label mon to host np0005486730.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' 
Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label _admin to host np0005486730.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label mon to host np0005486731.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: 
from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: Added label _admin to host np0005486731.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label mon to host np0005486732.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Added label _admin to host np0005486732.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost 
ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:01 localhost ceph-mon[301930]: Added label mon to host np0005486733.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:01 localhost ceph-mon[301930]: Added label _admin to host np0005486733.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Saving service mon spec with placement label:mon Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: Updating 
np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:01 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: Deploying daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486728 calling monitor election Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486730 calling monitor election Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486729 calling monitor election Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486733 calling monitor election Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486728 is new leader, mons 
np0005486728,np0005486730,np0005486729,np0005486733 in quorum (ranks 0,1,2,3) Oct 14 06:02:01 localhost ceph-mon[301930]: overall HEALTH_OK Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:01 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:02:01 localhost ceph-mon[301930]: Deploying daemon mon.np0005486731 on np0005486731.localdomain Oct 14 06:02:01 localhost ceph-mon[301930]: mon.np0005486731@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Oct 14 06:02:02 localhost sshd[301969]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:03 localhost openstack_network_exporter[248748]: ERROR 10:02:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:02:03 localhost openstack_network_exporter[248748]: ERROR 10:02:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:02:03 localhost openstack_network_exporter[248748]: ERROR 10:02:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:02:03 localhost openstack_network_exporter[248748]: ERROR 10:02:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:02:03 localhost openstack_network_exporter[248748]: Oct 14 06:02:03 localhost openstack_network_exporter[248748]: ERROR 10:02:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:02:03 localhost openstack_network_exporter[248748]: Oct 14 06:02:05 localhost 
ceph-mds[299096]: mds.beacon.mds.np0005486731.onyaog missed beacon ack from the monitors Oct 14 06:02:05 localhost sshd[301972]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:05 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557a99fd51e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Oct 14 06:02:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:02:06 localhost podman[301993]: 2025-10-14 10:02:06.191312285 +0000 UTC m=+0.087653031 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Oct 14 06:02:06 localhost podman[301993]: 2025-10-14 10:02:06.206601915 +0000 UTC m=+0.102942641 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:02:06 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:02:07 localhost systemd[1]: tmp-crun.kLf2mI.mount: Deactivated successfully. Oct 14 06:02:07 localhost podman[302122]: 2025-10-14 10:02:07.287670082 +0000 UTC m=+0.103733162 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, distribution-scope=public, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:02:07 localhost podman[302122]: 2025-10-14 10:02:07.410552257 +0000 UTC m=+0.226615357 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, 
io.buildah.version=1.33.12, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-type=git, release=553, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:02:07 localhost ceph-mon[301930]: mon.np0005486731@-1(probing) e6 my rank is now 5 (was -1) Oct 14 06:02:07 localhost ceph-mon[301930]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election Oct 14 06:02:07 localhost ceph-mon[301930]: paxos.5).electionLogic(0) init, first boot, initializing epoch at 1 Oct 14 06:02:07 localhost ceph-mon[301930]: mon.np0005486731@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:02:08 localhost sshd[302242]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 06:02:09 localhost podman[302244]: 2025-10-14 10:02:09.5439001 +0000 UTC m=+0.081968020 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:02:09 localhost podman[302244]: 2025-10-14 10:02:09.553376533 +0000 UTC m=+0.091444423 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:02:09 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486731@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486731@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486731@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486728 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486730 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486729 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486733 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486732 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486728 is new leader, mons np0005486728,np0005486730,np0005486729,np0005486733,np0005486732 in quorum (ranks 0,1,2,3,4) Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. 
Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.782570) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436131782665, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 9816, "num_deletes": 254, "total_data_size": 10873277, "memory_usage": 11103024, "flush_reason": "Manual Compaction"} Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Oct 14 06:02:11 localhost ceph-mon[301930]: overall HEALTH_OK Oct 14 06:02:11 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:11 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486728 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486733 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486729 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486730 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486728 is new leader, mons np0005486728,np0005486730,np0005486729,np0005486733 in quorum (ranks 0,1,2,3) Oct 14 06:02:11 localhost ceph-mon[301930]: Health check failed: 2/6 mons down, quorum np0005486728,np0005486730,np0005486729,np0005486733 (MON_DOWN) Oct 14 06:02:11 localhost ceph-mon[301930]: Health detail: HEALTH_WARN 2/6 mons down, quorum np0005486728,np0005486730,np0005486729,np0005486733 Oct 14 06:02:11 localhost 
ceph-mon[301930]: [WRN] MON_DOWN: 2/6 mons down, quorum np0005486728,np0005486730,np0005486729,np0005486733 Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486732 (rank 4) addr [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] is down (out of quorum) Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486731 (rank 5) addr [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] is down (out of quorum) Oct 14 06:02:11 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:11 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:02:11 localhost ceph-mon[301930]: mgrc update_daemon_metadata mon.np0005486731 metadata {addrs=[v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005486731.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005486731.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436131823623, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 9728741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 9821, 
"table_properties": {"data_size": 9673759, "index_size": 30108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23813, "raw_key_size": 250743, "raw_average_key_size": 26, "raw_value_size": 9510276, "raw_average_value_size": 1000, "num_data_blocks": 1161, "num_entries": 9510, "num_filter_entries": 9510, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436121, "oldest_key_time": 1760436121, "file_creation_time": 1760436131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8075a854-41fd-4ab6-89af-6366aa1d00c3", "db_session_id": "PP6GOKDVVBVE8Q3KEL61", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 41131 microseconds, and 21277 cpu microseconds. 
Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.823692) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 9728741 bytes OK Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.823761) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.825791) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.825822) EVENT_LOG_v1 {"time_micros": 1760436131825813, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.825849) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10804599, prev total WAL file size 10821816, number of live WAL files 2. Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.828191) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. 
'7061786F73003130353433' seq:0, type:0; will stop at (end) Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(9500KB) 8(2012B)] Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436131828308, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 9730753, "oldest_snapshot_seqno": -1} Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9260 keys, 9725415 bytes, temperature: kUnknown Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436131880757, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 9725415, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9671123, "index_size": 30063, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 245974, "raw_average_key_size": 26, "raw_value_size": 9510931, "raw_average_value_size": 1027, "num_data_blocks": 1160, "num_entries": 9260, "num_filter_entries": 9260, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436121, "oldest_key_time": 0, "file_creation_time": 1760436131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8075a854-41fd-4ab6-89af-6366aa1d00c3", "db_session_id": "PP6GOKDVVBVE8Q3KEL61", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.881060) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 9725415 bytes Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.882660) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.2 rd, 185.1 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(9.3, 0.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 9515, records dropped: 255 output_compression: NoCompression Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.882689) EVENT_LOG_v1 {"time_micros": 1760436131882677, "job": 4, "event": "compaction_finished", "compaction_time_micros": 52543, "compaction_time_cpu_micros": 30411, "output_level": 6, "num_output_files": 1, "total_output_size": 9725415, "num_input_records": 9515, "num_output_records": 9260, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000014.sst immediately, 
rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436131884425, "job": 4, "event": "table_file_deletion", "file_number": 14} Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436131884483, "job": 4, "event": "table_file_deletion", "file_number": 8} Oct 14 06:02:11 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:02:11.828085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486732 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486731 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486728 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486730 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486733 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486729 calling monitor election Oct 14 06:02:11 localhost ceph-mon[301930]: mon.np0005486728 is new leader, mons np0005486728,np0005486730,np0005486729,np0005486733,np0005486732,np0005486731 in quorum (ranks 0,1,2,3,4,5) Oct 14 06:02:11 localhost ceph-mon[301930]: Health check cleared: MON_DOWN (was: 2/6 mons down, quorum np0005486728,np0005486730,np0005486729,np0005486733) Oct 14 06:02:11 localhost ceph-mon[301930]: Cluster is now healthy Oct 14 06:02:11 localhost ceph-mon[301930]: overall HEALTH_OK Oct 14 06:02:11 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:11 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:02:12 localhost podman[302348]: 2025-10-14 10:02:12.0542359 +0000 UTC m=+0.088093883 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, container_name=openstack_network_exporter, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 06:02:12 localhost podman[302348]: 2025-10-14 10:02:12.068984665 +0000 UTC m=+0.102842608 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': 
'/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 06:02:12 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:02:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:02:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:02:12 localhost sshd[302427]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:12 localhost systemd[1]: tmp-crun.IjB3Ax.mount: Deactivated successfully. Oct 14 06:02:12 localhost podman[302400]: 2025-10-14 10:02:12.206094762 +0000 UTC m=+0.095815460 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible) Oct 14 06:02:12 localhost podman[302401]: 2025-10-14 10:02:12.277851996 +0000 UTC m=+0.162040336 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, 
config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:02:12 localhost podman[302400]: 2025-10-14 10:02:12.303713969 +0000 UTC m=+0.193434697 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 06:02:12 localhost podman[302401]: 2025-10-14 10:02:12.315329721 +0000 UTC m=+0.199518041 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:02:12 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:02:12 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:02:12 localhost ceph-mon[301930]: Updating np0005486728.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:12 localhost ceph-mon[301930]: Updating np0005486729.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:12 localhost ceph-mon[301930]: Updating np0005486730.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:12 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:12 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:12 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:12 localhost ceph-mon[301930]: Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:13 localhost nova_compute[295778]: 2025-10-14 10:02:13.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:02:13 localhost nova_compute[295778]: 2025-10-14 10:02:13.936 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:02:13 localhost nova_compute[295778]: 2025-10-14 10:02:13.937 2 DEBUG 
oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:02:13 localhost nova_compute[295778]: 2025-10-14 10:02:13.937 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:02:13 localhost nova_compute[295778]: 2025-10-14 10:02:13.938 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:02:13 localhost nova_compute[295778]: 2025-10-14 10:02:13.938 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:02:14 localhost ceph-mon[301930]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:14 localhost ceph-mon[301930]: Updating np0005486728.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:14 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:14 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:14 
localhost ceph-mon[301930]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' Oct 14 06:02:14 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:02:14 localhost nova_compute[295778]: 2025-10-14 10:02:14.386 2 DEBUG oslo_concurrency.processutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:02:14 localhost nova_compute[295778]: 2025-10-14 10:02:14.581 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:02:14 localhost nova_compute[295778]: 2025-10-14 10:02:14.583 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12255MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:02:14 localhost nova_compute[295778]: 2025-10-14 10:02:14.583 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:02:14 localhost nova_compute[295778]: 2025-10-14 10:02:14.584 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:02:15 localhost nova_compute[295778]: 2025-10-14 10:02:15.122 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:02:15 localhost nova_compute[295778]: 2025-10-14 10:02:15.122 2 DEBUG 
nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 06:02:15 localhost ceph-mon[301930]: Reconfiguring mon.np0005486728 (monmap changed)...
Oct 14 06:02:15 localhost ceph-mon[301930]: Reconfiguring daemon mon.np0005486728 on np0005486728.localdomain
Oct 14 06:02:15 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:15 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:15 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486728.giajub", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:02:15 localhost nova_compute[295778]: 2025-10-14 10:02:15.416 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 06:02:15 localhost sshd[302756]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:02:15 localhost nova_compute[295778]: 2025-10-14 10:02:15.682 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 06:02:15 localhost nova_compute[295778]: 2025-10-14 10:02:15.683 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 06:02:15 localhost nova_compute[295778]: 2025-10-14 10:02:15.704 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 06:02:15 localhost nova_compute[295778]: 2025-10-14 10:02:15.722 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 06:02:15 localhost nova_compute[295778]: 2025-10-14 10:02:15.746 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:02:16 localhost nova_compute[295778]: 2025-10-14 10:02:16.202 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:02:16 localhost nova_compute[295778]: 2025-10-14 10:02:16.208 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 06:02:16 localhost nova_compute[295778]: 2025-10-14 10:02:16.235 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 06:02:16 localhost nova_compute[295778]: 2025-10-14 10:02:16.237 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 06:02:16 localhost nova_compute[295778]: 2025-10-14 10:02:16.238 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:02:16 localhost ceph-mon[301930]: Reconfiguring mgr.np0005486728.giajub (monmap changed)...
Oct 14 06:02:16 localhost ceph-mon[301930]: Reconfiguring daemon mgr.np0005486728.giajub on np0005486728.localdomain
Oct 14 06:02:16 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:16 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:16 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486728.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:02:16 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:17 localhost ceph-mon[301930]: Reconfiguring crash.np0005486728 (monmap changed)...
Oct 14 06:02:17 localhost ceph-mon[301930]: Reconfiguring daemon crash.np0005486728 on np0005486728.localdomain
Oct 14 06:02:17 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:17 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:17 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486729.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.235 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.270 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.270 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.270 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.283 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.284 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.284 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.285 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.285 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.286 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:18 localhost ceph-mon[301930]: Reconfiguring crash.np0005486729 (monmap changed)...
Oct 14 06:02:18 localhost ceph-mon[301930]: Reconfiguring daemon crash.np0005486729 on np0005486729.localdomain
Oct 14 06:02:18 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:18 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:18 localhost ceph-mon[301930]: Reconfiguring mon.np0005486729 (monmap changed)...
Oct 14 06:02:18 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: Reconfiguring daemon mon.np0005486729 on np0005486729.localdomain
Oct 14 06:02:18 localhost sshd[302780]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon).osd e79 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon).osd e79 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon).osd e80 e80: 6 total, 6 up, 6 in
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486728"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mon metadata", "id": "np0005486728"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486729"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mon metadata", "id": "np0005486729"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486730"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mon metadata", "id": "np0005486730"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486731"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mon metadata", "id": "np0005486731"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486732"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mon metadata", "id": "np0005486732"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005486733.tvstmf"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mds metadata", "who": "mds.np0005486733.tvstmf"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon).mds e16 all = 0
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005486731.onyaog"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mds metadata", "who": "mds.np0005486731.onyaog"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon).mds e16 all = 0
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005486732.xkownj"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mds metadata", "who": "mds.np0005486732.xkownj"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon).mds e16 all = 0
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005486730.ddfidc", "id": "np0005486730.ddfidc"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mgr metadata", "who": "np0005486730.ddfidc", "id": "np0005486730.ddfidc"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005486731.swasqz", "id": "np0005486731.swasqz"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mgr metadata", "who": "np0005486731.swasqz", "id": "np0005486731.swasqz"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005486732.pasqzz", "id": "np0005486732.pasqzz"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mgr metadata", "who": "np0005486732.pasqzz", "id": "np0005486732.pasqzz"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005486733.primvu", "id": "np0005486733.primvu"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mgr metadata", "who": "np0005486733.primvu", "id": "np0005486733.primvu"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005486729.xpybho", "id": "np0005486729.xpybho"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mgr metadata", "who": "np0005486729.xpybho", "id": "np0005486729.xpybho"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mds metadata"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon).mds e16 all = 1
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "osd metadata"} : dispatch
Oct 14 06:02:18 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 14 06:02:18 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mon metadata"} : dispatch
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:18 localhost nova_compute[295778]: 2025-10-14 10:02:18.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 06:02:18 localhost systemd-logind[760]: Session 19 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd[1]: session-21.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd[1]: session-19.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd[1]: session-24.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd[1]: session-14.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd[1]: session-18.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 21 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd[1]: session-20.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 14 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 18 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 20 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 24 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd[1]: session-26.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd[1]: session-26.scope: Consumed 3min 35.993s CPU time.
Oct 14 06:02:18 localhost systemd[1]: session-16.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 26 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd[1]: session-22.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd[1]: session-23.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 16 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 22 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 23 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 19.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 21.
Oct 14 06:02:18 localhost systemd[1]: session-25.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 24.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 25 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 14.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 18.
Oct 14 06:02:18 localhost systemd[1]: session-17.scope: Deactivated successfully.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 20.
Oct 14 06:02:18 localhost systemd-logind[760]: Session 17 logged out. Waiting for processes to exit.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 26.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 16.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 22.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 23.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 25.
Oct 14 06:02:18 localhost systemd-logind[760]: Removed session 17.
Oct 14 06:02:19 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486730.ddfidc/mirror_snapshot_schedule"} v 0)
Oct 14 06:02:19 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486730.ddfidc/mirror_snapshot_schedule"} : dispatch
Oct 14 06:02:19 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486730.ddfidc/trash_purge_schedule"} v 0)
Oct 14 06:02:19 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486730.ddfidc/trash_purge_schedule"} : dispatch
Oct 14 06:02:19 localhost sshd[302782]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:02:19 localhost systemd-logind[760]: New session 67 of user ceph-admin.
Oct 14 06:02:19 localhost systemd[1]: Started Session 67 of User ceph-admin.
Oct 14 06:02:19 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:19 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub'
Oct 14 06:02:19 localhost ceph-mon[301930]: Reconfiguring mgr.np0005486729.xpybho (monmap changed)...
Oct 14 06:02:19 localhost ceph-mon[301930]: from='mgr.14120 172.18.0.103:0/1668635823' entity='mgr.np0005486728.giajub' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486729.xpybho", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:02:19 localhost ceph-mon[301930]: Reconfiguring daemon mgr.np0005486729.xpybho on np0005486729.localdomain
Oct 14 06:02:19 localhost ceph-mon[301930]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Oct 14 06:02:19 localhost ceph-mon[301930]: Activating manager daemon np0005486730.ddfidc
Oct 14 06:02:19 localhost ceph-mon[301930]: from='client.? 172.18.0.103:0/3139066435' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Oct 14 06:02:19 localhost ceph-mon[301930]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Oct 14 06:02:19 localhost ceph-mon[301930]: Manager daemon np0005486730.ddfidc is now available
Oct 14 06:02:19 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486730.ddfidc/mirror_snapshot_schedule"} : dispatch
Oct 14 06:02:19 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486730.ddfidc/mirror_snapshot_schedule"} : dispatch
Oct 14 06:02:19 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486730.ddfidc/trash_purge_schedule"} : dispatch
Oct 14 06:02:19 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486730.ddfidc/trash_purge_schedule"} : dispatch
Oct 14 06:02:20 localhost podman[302897]: 2025-10-14 10:02:20.407791808 +0000 UTC m=+0.094480275 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vcs-type=git, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, version=7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 14 06:02:20 localhost podman[302897]: 2025-10-14 10:02:20.531433163 +0000 UTC m=+0.218121610 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, ceph=True)
Oct 14 06:02:20 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486728.localdomain.devices.0}] v 0)
Oct 14 06:02:20 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486728.localdomain}] v 0)
Oct 14 06:02:20 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486729.localdomain.devices.0}] v 0)
Oct 14 06:02:20 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486729.localdomain}] v 0)
Oct 14 06:02:20 localhost ceph-mon[301930]: [14/Oct/2025:10:02:20] ENGINE Bus STARTING
Oct 14 06:02:20 localhost ceph-mon[301930]: [14/Oct/2025:10:02:20] ENGINE Serving on https://172.18.0.105:7150
Oct 14 06:02:20 localhost ceph-mon[301930]: [14/Oct/2025:10:02:20] ENGINE Client ('172.18.0.105', 34948) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 14 06:02:20 localhost ceph-mon[301930]: [14/Oct/2025:10:02:20] ENGINE Serving on http://172.18.0.105:8765
Oct 14 06:02:20 localhost ceph-mon[301930]: [14/Oct/2025:10:02:20] ENGINE Bus STARTED
Oct 14 06:02:20 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:20 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:20 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:20 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:20 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0)
Oct 14 06:02:20 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0)
Oct 14 06:02:20 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0)
Oct 14 06:02:20 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0)
Oct 14 06:02:21 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:02:21 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0)
Oct 14 06:02:21 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:02:21 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0)
Oct 14 06:02:21 localhost ceph-mon[301930]: mon.np0005486731@5(peon).osd e80 _set_new_cache_sizes cache_size:1019830776 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:02:21 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:21 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:21 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:21 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:21 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:21 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:21 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:21 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:22 localhost sshd[303083]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486728.localdomain.devices.0}] v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486728.localdomain}] v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005486728", "name": "osd_memory_target"} v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd/host:np0005486728", "name": "osd_memory_target"} : dispatch
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005486730", "name": "osd_memory_target"} v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd/host:np0005486730", "name": "osd_memory_target"} : dispatch
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486729.localdomain.devices.0}] v 0)
Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6
handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486729.localdomain}] v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005486729", "name": "osd_memory_target"} v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd/host:np0005486729", "name": "osd_memory_target"} : dispatch Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' 
entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:02:22 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:02:22 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: 
from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd/host:np0005486728", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd/host:np0005486728", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd/host:np0005486730", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd/host:np0005486730", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 
' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: Adjusting osd_memory_target on np0005486733.localdomain to 836.6M Oct 14 06:02:23 localhost ceph-mon[301930]: Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd/host:np0005486729", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd/host:np0005486729", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": 
"config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: Adjusting osd_memory_target on np0005486731.localdomain to 836.6M Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: Adjusting osd_memory_target on np0005486732.localdomain to 836.6M Oct 14 06:02:23 localhost ceph-mon[301930]: Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:02:23 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:23 localhost ceph-mon[301930]: Updating np0005486728.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:23 localhost ceph-mon[301930]: Updating np0005486729.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:23 localhost ceph-mon[301930]: Updating np0005486730.localdomain:/etc/ceph/ceph.conf Oct 14 
06:02:23 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:23 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:23 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:24 localhost ceph-mon[301930]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:24 localhost ceph-mon[301930]: Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:24 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:24 localhost ceph-mon[301930]: Updating np0005486728.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:24 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:24 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:02:24 localhost systemd[1]: tmp-crun.vtr8Qb.mount: Deactivated successfully. 
Oct 14 06:02:24 localhost podman[303551]: 2025-10-14 10:02:24.722950251 +0000 UTC m=+0.084807954 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 14 06:02:24 localhost podman[303551]: 2025-10-14 10:02:24.761225818 +0000 UTC m=+0.123083521 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:02:24 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:02:24 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005486728.giajub", "id": "np0005486728.giajub"} v 0) Oct 14 06:02:24 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mgr metadata", "who": "np0005486728.giajub", "id": "np0005486728.giajub"} : dispatch Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486729.localdomain.devices.0}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486729.localdomain}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486728.localdomain.devices.0}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486728.localdomain}] v 0) Oct 14 06:02:25 localhost sshd[303763]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:02:25 
localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:02:25 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:02:25 localhost ceph-mon[301930]: Updating np0005486730.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:25 localhost ceph-mon[301930]: Updating np0005486729.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:25 localhost ceph-mon[301930]: Updating np0005486728.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:25 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:25 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:25 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:25 localhost 
ceph-mon[301930]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:25 localhost ceph-mon[301930]: Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:25 localhost ceph-mon[301930]: Updating np0005486728.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:25 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:26 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486729.xpybho", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:02:26 localhost ceph-mon[301930]: log_channel(audit) 
log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486729.xpybho", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:26 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 14 06:02:26 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mgr services"} : dispatch Oct 14 06:02:26 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:02:26 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:02:26 localhost ceph-mon[301930]: mon.np0005486731@5(peon).osd e80 _set_new_cache_sizes cache_size:1020050799 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:02:26 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:26 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:26 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:26 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486729.xpybho", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:26 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", 
"entity": "mgr.np0005486729.xpybho", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:27 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486729.localdomain.devices.0}] v 0) Oct 14 06:02:27 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486729.localdomain}] v 0) Oct 14 06:02:27 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Oct 14 06:02:27 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:02:27 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Oct 14 06:02:27 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Oct 14 06:02:27 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:02:27 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:02:27 localhost ceph-mon[301930]: Reconfiguring mgr.np0005486729.xpybho (monmap changed)... 
Oct 14 06:02:27 localhost ceph-mon[301930]: Reconfiguring daemon mgr.np0005486729.xpybho on np0005486729.localdomain Oct 14 06:02:27 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:27 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:27 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:02:28 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0) Oct 14 06:02:28 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0) Oct 14 06:02:28 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:02:28 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:28 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 14 06:02:28 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mgr services"} : dispatch Oct 14 06:02:28 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:02:28 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' 
entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:02:28 localhost sshd[303819]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:28 localhost ceph-mon[301930]: Reconfiguring mon.np0005486730 (monmap changed)... Oct 14 06:02:28 localhost ceph-mon[301930]: Reconfiguring daemon mon.np0005486730 on np0005486730.localdomain Oct 14 06:02:28 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:28 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:28 localhost ceph-mon[301930]: Reconfiguring mgr.np0005486730.ddfidc (monmap changed)... Oct 14 06:02:28 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:28 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:28 localhost ceph-mon[301930]: Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain Oct 14 06:02:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:02:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:02:28 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0)
Oct 14 06:02:29 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 14 06:02:29 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0)
Oct 14 06:02:29 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 14 06:02:29 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:02:29 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:02:29 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:02:29 localhost systemd[1]: tmp-crun.YdytRC.mount: Deactivated successfully.
Oct 14 06:02:29 localhost podman[303821]: 2025-10-14 10:02:29.093751987 +0000 UTC m=+0.109087476 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 14 06:02:29 localhost podman[303821]: 2025-10-14 10:02:29.131190081 +0000 UTC m=+0.146525580 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:02:29 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 06:02:29 localhost podman[303822]: 2025-10-14 10:02:29.133763511 +0000 UTC m=+0.147536778 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 06:02:29 localhost podman[303822]: 2025-10-14 10:02:29.21318556 +0000 UTC m=+0.226958787 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 06:02:29 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:02:30 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:30 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:30 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:30 localhost ceph-mon[301930]: Reconfiguring crash.np0005486730 (monmap changed)...
Oct 14 06:02:30 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:02:30 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:02:30 localhost ceph-mon[301930]: Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain
Oct 14 06:02:30 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0)
Oct 14 06:02:30 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0)
Oct 14 06:02:30 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 14 06:02:30 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:02:30 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:02:30 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:02:30 localhost podman[246584]: time="2025-10-14T10:02:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 06:02:30 localhost podman[246584]: @ - - [14/Oct/2025:10:02:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1"
Oct 14 06:02:30 localhost podman[246584]: @ - - [14/Oct/2025:10:02:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18332 "" "Go-http-client/1.1"
Oct 14 06:02:30 localhost podman[303918]:
Oct 14 06:02:30 localhost podman[303918]: 2025-10-14 10:02:30.843469373 +0000 UTC m=+0.081943118 container create f206417a93ebde7420fabaf348a31ceae08e22598b280c3421cd5e953e761fa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_joliot, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, name=rhceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , distribution-scope=public, ceph=True, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, architecture=x86_64)
Oct 14 06:02:30 localhost systemd[1]: Started libpod-conmon-f206417a93ebde7420fabaf348a31ceae08e22598b280c3421cd5e953e761fa3.scope.
Oct 14 06:02:30 localhost systemd[1]: Started libcrun container.
Oct 14 06:02:30 localhost podman[303918]: 2025-10-14 10:02:30.811964929 +0000 UTC m=+0.050438674 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:02:30 localhost podman[303918]: 2025-10-14 10:02:30.921506756 +0000 UTC m=+0.159980451 container init f206417a93ebde7420fabaf348a31ceae08e22598b280c3421cd5e953e761fa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_joliot, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public)
Oct 14 06:02:30 localhost systemd[1]: tmp-crun.zoi8uy.mount: Deactivated successfully.
Oct 14 06:02:30 localhost podman[303918]: 2025-10-14 10:02:30.934619077 +0000 UTC m=+0.173092782 container start f206417a93ebde7420fabaf348a31ceae08e22598b280c3421cd5e953e761fa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_joliot, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=553)
Oct 14 06:02:30 localhost podman[303918]: 2025-10-14 10:02:30.934945655 +0000 UTC m=+0.173419390 container attach f206417a93ebde7420fabaf348a31ceae08e22598b280c3421cd5e953e761fa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_joliot, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, vcs-type=git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 14 06:02:30 localhost determined_joliot[303933]: 167 167
Oct 14 06:02:30 localhost systemd[1]: libpod-f206417a93ebde7420fabaf348a31ceae08e22598b280c3421cd5e953e761fa3.scope: Deactivated successfully.
Oct 14 06:02:30 localhost podman[303918]: 2025-10-14 10:02:30.941618685 +0000 UTC m=+0.180092390 container died f206417a93ebde7420fabaf348a31ceae08e22598b280c3421cd5e953e761fa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_joliot, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, release=553, name=rhceph, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 14 06:02:31 localhost podman[303938]: 2025-10-14 10:02:31.047897515 +0000 UTC m=+0.090037656 container remove f206417a93ebde7420fabaf348a31ceae08e22598b280c3421cd5e953e761fa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_joliot, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, RELEASE=main, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, release=553, version=7)
Oct 14 06:02:31 localhost systemd[1]: libpod-conmon-f206417a93ebde7420fabaf348a31ceae08e22598b280c3421cd5e953e761fa3.scope: Deactivated successfully.
Oct 14 06:02:31 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:31 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:31 localhost ceph-mon[301930]: Reconfiguring crash.np0005486731 (monmap changed)...
Oct 14 06:02:31 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:02:31 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:02:31 localhost ceph-mon[301930]: Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain
Oct 14 06:02:31 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:02:31 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:02:31 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct 14 06:02:31 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 14 06:02:31 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:02:31 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:02:31 localhost ceph-mon[301930]: mon.np0005486731@5(peon).osd e80 _set_new_cache_sizes cache_size:1020054662 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:02:31 localhost podman[304008]:
Oct 14 06:02:31 localhost podman[304008]: 2025-10-14 10:02:31.786679634 +0000 UTC m=+0.081895588 container create 89c4093790044dbc00ef255da94a70355a50450b98e82f90c69949f2b80f8990 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_lovelace, release=553, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 14 06:02:31 localhost systemd[1]: Started libpod-conmon-89c4093790044dbc00ef255da94a70355a50450b98e82f90c69949f2b80f8990.scope.
Oct 14 06:02:31 localhost systemd[1]: Started libcrun container.
Oct 14 06:02:31 localhost podman[304008]: 2025-10-14 10:02:31.848537593 +0000 UTC m=+0.143753537 container init 89c4093790044dbc00ef255da94a70355a50450b98e82f90c69949f2b80f8990 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_lovelace, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Oct 14 06:02:31 localhost systemd[1]: var-lib-containers-storage-overlay-cbcb89fb0951591fa3a3c5ba9eafacfb4060a25308021642b084007b60f2c96b-merged.mount: Deactivated successfully.
Oct 14 06:02:31 localhost podman[304008]: 2025-10-14 10:02:31.753216697 +0000 UTC m=+0.048432671 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:02:31 localhost podman[304008]: 2025-10-14 10:02:31.85929711 +0000 UTC m=+0.154513054 container start 89c4093790044dbc00ef255da94a70355a50450b98e82f90c69949f2b80f8990 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_lovelace, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , distribution-scope=public, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 14 06:02:31 localhost podman[304008]: 2025-10-14 10:02:31.859543448 +0000 UTC m=+0.154759402 container attach 89c4093790044dbc00ef255da94a70355a50450b98e82f90c69949f2b80f8990 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_lovelace, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, version=7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 14 06:02:31 localhost fervent_lovelace[304023]: 167 167
Oct 14 06:02:31 localhost systemd[1]: libpod-89c4093790044dbc00ef255da94a70355a50450b98e82f90c69949f2b80f8990.scope: Deactivated successfully.
Oct 14 06:02:31 localhost podman[304008]: 2025-10-14 10:02:31.863018001 +0000 UTC m=+0.158233975 container died 89c4093790044dbc00ef255da94a70355a50450b98e82f90c69949f2b80f8990 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_lovelace, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, name=rhceph)
Oct 14 06:02:31 localhost systemd[1]: var-lib-containers-storage-overlay-a09ab35a884c6530939f50420f317c825dfdfe99e314f501eb7644614b6b4d47-merged.mount: Deactivated successfully.
Oct 14 06:02:31 localhost podman[304028]: 2025-10-14 10:02:31.963585457 +0000 UTC m=+0.087404195 container remove 89c4093790044dbc00ef255da94a70355a50450b98e82f90c69949f2b80f8990 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_lovelace, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, release=553, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 14 06:02:31 localhost systemd[1]: libpod-conmon-89c4093790044dbc00ef255da94a70355a50450b98e82f90c69949f2b80f8990.scope: Deactivated successfully.
Oct 14 06:02:32 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:32 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:32 localhost ceph-mon[301930]: Reconfiguring osd.2 (monmap changed)... Oct 14 06:02:32 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:02:32 localhost ceph-mon[301930]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:02:32 localhost sshd[304050]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:32 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:02:32 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:02:32 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Oct 14 06:02:32 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:02:32 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:02:32 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:02:32 localhost podman[304104]: Oct 14 06:02:32 localhost podman[304104]: 2025-10-14 10:02:32.844507008 +0000 UTC m=+0.082163974 container create 9d3ea10f9bda01ba7c87b99d7595b5f0ee79621e1b239cbabbbce0b93c3d1c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_cannon, 
com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=553, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 14 06:02:32 localhost systemd[1]: Started libpod-conmon-9d3ea10f9bda01ba7c87b99d7595b5f0ee79621e1b239cbabbbce0b93c3d1c81.scope.
Oct 14 06:02:32 localhost systemd[1]: Started libcrun container.
Oct 14 06:02:32 localhost podman[304104]: 2025-10-14 10:02:32.812598233 +0000 UTC m=+0.050255229 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:02:32 localhost podman[304104]: 2025-10-14 10:02:32.913305243 +0000 UTC m=+0.150962219 container init 9d3ea10f9bda01ba7c87b99d7595b5f0ee79621e1b239cbabbbce0b93c3d1c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_cannon, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, release=553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git)
Oct 14 06:02:32 localhost frosty_cannon[304119]: 167 167
Oct 14 06:02:32 localhost systemd[1]: libpod-9d3ea10f9bda01ba7c87b99d7595b5f0ee79621e1b239cbabbbce0b93c3d1c81.scope: Deactivated successfully.
Oct 14 06:02:32 localhost podman[304104]: 2025-10-14 10:02:32.926578758 +0000 UTC m=+0.164235734 container start 9d3ea10f9bda01ba7c87b99d7595b5f0ee79621e1b239cbabbbce0b93c3d1c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_cannon, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=553, distribution-scope=public, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git)
Oct 14 06:02:32 localhost podman[304104]: 2025-10-14 10:02:32.926938719 +0000 UTC m=+0.164595755 container attach 9d3ea10f9bda01ba7c87b99d7595b5f0ee79621e1b239cbabbbce0b93c3d1c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_cannon, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported
base image., CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main)
Oct 14 06:02:32 localhost podman[304104]: 2025-10-14 10:02:32.929561049 +0000 UTC m=+0.167218045 container died 9d3ea10f9bda01ba7c87b99d7595b5f0ee79621e1b239cbabbbce0b93c3d1c81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_cannon, architecture=x86_64, distribution-scope=public, version=7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 14 06:02:33 localhost podman[304125]: 2025-10-14 10:02:33.030984248 +0000 UTC m=+0.085764661 container remove 9d3ea10f9bda01ba7c87b99d7595b5f0ee79621e1b239cbabbbce0b93c3d1c81
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_cannon, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, release=553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Oct 14 06:02:33 localhost systemd[1]: libpod-conmon-9d3ea10f9bda01ba7c87b99d7595b5f0ee79621e1b239cbabbbce0b93c3d1c81.scope: Deactivated successfully.
Oct 14 06:02:33 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:33 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:33 localhost ceph-mon[301930]: Reconfiguring osd.4 (monmap changed)...
Oct 14 06:02:33 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 14 06:02:33 localhost ceph-mon[301930]: Reconfiguring daemon osd.4 on np0005486731.localdomain
Oct 14 06:02:33 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:02:33 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:02:33 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 14 06:02:33 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 14 06:02:33 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:02:33 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:02:33 localhost openstack_network_exporter[248748]: ERROR 10:02:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 06:02:33 localhost openstack_network_exporter[248748]: ERROR 10:02:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:02:33 localhost openstack_network_exporter[248748]: ERROR 10:02:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:02:33 localhost openstack_network_exporter[248748]: ERROR 10:02:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 06:02:33 localhost openstack_network_exporter[248748]:
Oct 14 06:02:33 localhost openstack_network_exporter[248748]: ERROR 10:02:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 06:02:33 localhost openstack_network_exporter[248748]:
Oct 14 06:02:33 localhost systemd[1]: var-lib-containers-storage-overlay-6446c3de951a73785470cd97a1a66e18b165318f0bd700cd94436f9dc0e4d913-merged.mount: Deactivated successfully.
Oct 14 06:02:33 localhost podman[304201]:
Oct 14 06:02:33 localhost podman[304201]: 2025-10-14 10:02:33.941192274 +0000 UTC m=+0.084595930 container create 63898cd9f08b8da78339a6c9503769fee6b4b3c827659ad6237889333d079aa5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jennings, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, release=553, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, architecture=x86_64, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.,
maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.)
Oct 14 06:02:33 localhost systemd[1]: Started libpod-conmon-63898cd9f08b8da78339a6c9503769fee6b4b3c827659ad6237889333d079aa5.scope.
Oct 14 06:02:34 localhost systemd[1]: Started libcrun container.
Oct 14 06:02:34 localhost podman[304201]: 2025-10-14 10:02:33.909056702 +0000 UTC m=+0.052460438 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:02:34 localhost podman[304201]: 2025-10-14 10:02:34.021311912 +0000 UTC m=+0.164715568 container init 63898cd9f08b8da78339a6c9503769fee6b4b3c827659ad6237889333d079aa5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jennings, ceph=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, RELEASE=main, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., release=553, build-date=2025-09-24T08:57:55, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 14 06:02:34 localhost podman[304201]: 2025-10-14 10:02:34.031404923 +0000 UTC m=+0.174808579 container start 63898cd9f08b8da78339a6c9503769fee6b4b3c827659ad6237889333d079aa5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jennings, build-date=2025-09-24T08:57:55,
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, io.buildah.version=1.33.12, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container)
Oct 14 06:02:34 localhost podman[304201]: 2025-10-14 10:02:34.031808694 +0000 UTC m=+0.175212350 container attach 63898cd9f08b8da78339a6c9503769fee6b4b3c827659ad6237889333d079aa5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jennings, architecture=x86_64, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.k8s.description=Red
Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, name=rhceph, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, release=553)
Oct 14 06:02:34 localhost youthful_jennings[304217]: 167 167
Oct 14 06:02:34 localhost podman[304201]: 2025-10-14 10:02:34.035896483 +0000 UTC m=+0.179300199 container died 63898cd9f08b8da78339a6c9503769fee6b4b3c827659ad6237889333d079aa5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jennings, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vcs-type=git, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 14 06:02:34 localhost systemd[1]: libpod-63898cd9f08b8da78339a6c9503769fee6b4b3c827659ad6237889333d079aa5.scope: Deactivated successfully.
Oct 14 06:02:34 localhost podman[304222]: 2025-10-14 10:02:34.135319529 +0000 UTC m=+0.085626917 container remove 63898cd9f08b8da78339a6c9503769fee6b4b3c827659ad6237889333d079aa5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_jennings, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7)
Oct 14 06:02:34 localhost systemd[1]: libpod-conmon-63898cd9f08b8da78339a6c9503769fee6b4b3c827659ad6237889333d079aa5.scope: Deactivated successfully.
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:02:34 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:34 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc'
Oct 14 06:02:34 localhost ceph-mon[301930]: Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)...
Oct 14 06:02:34 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 14 06:02:34 localhost ceph-mon[301930]: from='mgr.14184 ' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 14 06:02:34 localhost ceph-mon[301930]: Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 14 06:02:34 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 14 06:02:34 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mgr services"} : dispatch
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:02:34 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 14 06:02:34 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "quorum_status"} : dispatch
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e6 handle_command mon_command({"prefix": "mon rm", "name": "np0005486728"} v 0)
Oct 14 06:02:34 localhost ceph-mon[301930]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mon rm", "name": "np0005486728"} : dispatch
Oct 14 06:02:34 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557a99fd51e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Oct 14 06:02:34 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Oct 14 06:02:34 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@5(peon) e7 my rank is now 4 (was 5)
Oct 14 06:02:34 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.106:3300/0
Oct 14 06:02:34 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.106:3300/0
Oct 14 06:02:34 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557a99fd5600 mon_map magic: 0 from mon.4 v2:172.18.0.106:3300/0
Oct 14 06:02:34 localhost ceph-mon[301930]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election
Oct 14 06:02:34 localhost ceph-mon[301930]: paxos.4).electionLogic(28) init, last seen epoch 28
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:02:34 localhost ceph-mon[301930]: mon.np0005486731@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:02:34 localhost systemd[1]: var-lib-containers-storage-overlay-a77aac86f3e9288386f930f10061a10e10f9bfa928b2aab05ce86ecc7e011fba-merged.mount: Deactivated successfully.
Oct 14 06:02:34 localhost podman[304292]:
Oct 14 06:02:34 localhost podman[304292]: 2025-10-14 10:02:34.961451451 +0000 UTC m=+0.085969367 container create 0f0eb5b87b9c4786c21696695f4db2164700cc5902d76d4c6f32a90dc53b8bd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_meitner, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.component=rhceph-container)
Oct 14 06:02:34 localhost systemd[1]: Started libpod-conmon-0f0eb5b87b9c4786c21696695f4db2164700cc5902d76d4c6f32a90dc53b8bd2.scope.
Oct 14 06:02:35 localhost systemd[1]: Started libcrun container.
Oct 14 06:02:35 localhost podman[304292]: 2025-10-14 10:02:35.018373567 +0000 UTC m=+0.142891423 container init 0f0eb5b87b9c4786c21696695f4db2164700cc5902d76d4c6f32a90dc53b8bd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_meitner, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , distribution-scope=public, architecture=x86_64, vcs-type=git, release=553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True)
Oct 14 06:02:35 localhost podman[304292]: 2025-10-14 10:02:34.925814845 +0000 UTC m=+0.050332751 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:02:35 localhost podman[304292]: 2025-10-14 10:02:35.030755719 +0000 UTC m=+0.155273575 container start 0f0eb5b87b9c4786c21696695f4db2164700cc5902d76d4c6f32a90dc53b8bd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_meitner, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red
Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 14 06:02:35 localhost podman[304292]: 2025-10-14 10:02:35.031015016 +0000 UTC m=+0.155532912 container attach 0f0eb5b87b9c4786c21696695f4db2164700cc5902d76d4c6f32a90dc53b8bd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_meitner, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_CLEAN=True, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, version=7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, release=553, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 14 06:02:35 localhost
sweet_meitner[304307]: 167 167
Oct 14 06:02:35 localhost systemd[1]: libpod-0f0eb5b87b9c4786c21696695f4db2164700cc5902d76d4c6f32a90dc53b8bd2.scope: Deactivated successfully.
Oct 14 06:02:35 localhost podman[304292]: 2025-10-14 10:02:35.034290584 +0000 UTC m=+0.158808470 container died 0f0eb5b87b9c4786c21696695f4db2164700cc5902d76d4c6f32a90dc53b8bd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_meitner, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, RELEASE=main)
Oct 14 06:02:35 localhost podman[304312]: 2025-10-14 10:02:35.143773559 +0000 UTC m=+0.094521595 container remove 0f0eb5b87b9c4786c21696695f4db2164700cc5902d76d4c6f32a90dc53b8bd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_meitner, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-09-24T08:57:55,
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True)
Oct 14 06:02:35 localhost systemd[1]: libpod-conmon-0f0eb5b87b9c4786c21696695f4db2164700cc5902d76d4c6f32a90dc53b8bd2.scope: Deactivated successfully.
Oct 14 06:02:35 localhost sshd[304328]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:02:35 localhost systemd[1]: var-lib-containers-storage-overlay-e62c67dab097033a8ada8bcc4ff6bc3ac5ac02ed7edc76aa0aec393f19bcc2a0-merged.mount: Deactivated successfully.
Oct 14 06:02:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 06:02:36 localhost podman[304331]: 2025-10-14 10:02:36.557591419 +0000 UTC m=+0.091054173 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 06:02:36 localhost podman[304331]: 2025-10-14 10:02:36.570668949 +0000 UTC m=+0.104131673 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 06:02:36 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:02:39 localhost sshd[304351]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486731@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486731@4(electing) e7 handle_timecheck drop unexpected msg Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486731@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486731@4(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:02:39 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:39 localhost ceph-mon[301930]: Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:02:39 localhost ceph-mon[301930]: Remove daemons mon.np0005486728 Oct 14 06:02:39 localhost ceph-mon[301930]: Safe to remove mon.np0005486728: new quorum should be ['np0005486730', 'np0005486729', 'np0005486733', 'np0005486732', 'np0005486731'] (from ['np0005486730', 'np0005486729', 'np0005486733', 'np0005486732', 'np0005486731']) Oct 14 06:02:39 localhost ceph-mon[301930]: Removing monitor np0005486728 from monmap... 
Oct 14 06:02:39 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "mon rm", "name": "np0005486728"} : dispatch Oct 14 06:02:39 localhost ceph-mon[301930]: Removing daemon mon.np0005486728 from np0005486728.localdomain -- ports [] Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486731 calling monitor election Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486733 calling monitor election Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486730 calling monitor election Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486732 calling monitor election Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486729 calling monitor election Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486730 is new leader, mons np0005486730,np0005486729,np0005486733,np0005486731 in quorum (ranks 0,1,2,4) Oct 14 06:02:39 localhost ceph-mon[301930]: overall HEALTH_OK Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486730 calling monitor election Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486733 calling monitor election Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486729 calling monitor election Oct 14 06:02:39 localhost ceph-mon[301930]: mon.np0005486730 is new leader, mons np0005486730,np0005486729,np0005486733,np0005486732,np0005486731 in quorum (ranks 0,1,2,3,4) Oct 14 06:02:39 localhost ceph-mon[301930]: overall HEALTH_OK Oct 14 06:02:39 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:39 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:39 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:02:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 06:02:39 localhost systemd[1]: tmp-crun.g6Vch2.mount: Deactivated successfully. Oct 14 06:02:39 localhost podman[304371]: 2025-10-14 10:02:39.965806894 +0000 UTC m=+0.101100792 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 06:02:39 localhost podman[304371]: 2025-10-14 10:02:39.982123751 +0000 UTC m=+0.117417639 container exec_died 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 06:02:39 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:02:40 localhost podman[304425]: Oct 14 06:02:40 localhost podman[304425]: 2025-10-14 10:02:40.392936117 +0000 UTC m=+0.084385204 container create 9ff400b2cf5cf7dc38dc729170dc3d81b15765ea8f11e553ef73486fcc0be255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_lovelace, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git) Oct 14 06:02:40 localhost systemd[1]: Started libpod-conmon-9ff400b2cf5cf7dc38dc729170dc3d81b15765ea8f11e553ef73486fcc0be255.scope. Oct 14 06:02:40 localhost systemd[1]: Started libcrun container. 
Oct 14 06:02:40 localhost podman[304425]: 2025-10-14 10:02:40.358824462 +0000 UTC m=+0.050273599 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:02:40 localhost podman[304425]: 2025-10-14 10:02:40.462698228 +0000 UTC m=+0.154147315 container init 9ff400b2cf5cf7dc38dc729170dc3d81b15765ea8f11e553ef73486fcc0be255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_lovelace, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, version=7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55) Oct 14 06:02:40 localhost podman[304425]: 2025-10-14 10:02:40.471519574 +0000 UTC m=+0.162968651 container start 9ff400b2cf5cf7dc38dc729170dc3d81b15765ea8f11e553ef73486fcc0be255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_lovelace, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vcs-type=git, name=rhceph, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:02:40 localhost podman[304425]: 2025-10-14 10:02:40.471703719 +0000 UTC m=+0.163152796 container attach 9ff400b2cf5cf7dc38dc729170dc3d81b15765ea8f11e553ef73486fcc0be255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_lovelace, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, version=7, architecture=x86_64, ceph=True, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) 
Oct 14 06:02:40 localhost festive_lovelace[304440]: 167 167 Oct 14 06:02:40 localhost systemd[1]: libpod-9ff400b2cf5cf7dc38dc729170dc3d81b15765ea8f11e553ef73486fcc0be255.scope: Deactivated successfully. Oct 14 06:02:40 localhost podman[304425]: 2025-10-14 10:02:40.478393758 +0000 UTC m=+0.169842885 container died 9ff400b2cf5cf7dc38dc729170dc3d81b15765ea8f11e553ef73486fcc0be255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_lovelace, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, version=7, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:02:40 localhost podman[304445]: 2025-10-14 10:02:40.574479784 +0000 UTC m=+0.088953055 container remove 9ff400b2cf5cf7dc38dc729170dc3d81b15765ea8f11e553ef73486fcc0be255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_lovelace, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, name=rhceph, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 06:02:40 localhost systemd[1]: libpod-conmon-9ff400b2cf5cf7dc38dc729170dc3d81b15765ea8f11e553ef73486fcc0be255.scope: Deactivated successfully. Oct 14 06:02:40 localhost ceph-mon[301930]: Reconfiguring mon.np0005486731 (monmap changed)... Oct 14 06:02:40 localhost ceph-mon[301930]: Reconfiguring daemon mon.np0005486731 on np0005486731.localdomain Oct 14 06:02:40 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:40 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:40 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:02:40 localhost systemd[1]: var-lib-containers-storage-overlay-e13578f01a9f5e76d5abc5dc48f8023a09496a62c6a81b32c489caecc117eb8f-merged.mount: Deactivated successfully. 
Oct 14 06:02:41 localhost ceph-mon[301930]: mon.np0005486731@4(peon).osd e80 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:02:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:02:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5748 writes, 25K keys, 5748 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5748 writes, 751 syncs, 7.65 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 80 writes, 203 keys, 80 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s#012Interval WAL: 80 writes, 38 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 06:02:41 localhost ceph-mon[301930]: Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:02:41 localhost ceph-mon[301930]: Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:02:41 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:41 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:41 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:41 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 14 06:02:42 localhost sshd[304461]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 06:02:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:02:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:02:42 localhost podman[304464]: 2025-10-14 10:02:42.547788106 +0000 UTC m=+0.083184162 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 14 06:02:42 localhost systemd[1]: tmp-crun.irp7AL.mount: Deactivated successfully. 
Oct 14 06:02:42 localhost podman[304464]: 2025-10-14 10:02:42.65126582 +0000 UTC m=+0.186661906 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 14 06:02:42 localhost systemd[1]: tmp-crun.qu8Wfn.mount: Deactivated successfully. Oct 14 06:02:42 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:02:42 localhost podman[304465]: 2025-10-14 10:02:42.67287629 +0000 UTC m=+0.203552209 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:02:42 localhost podman[304463]: 2025-10-14 10:02:42.627677317 +0000 UTC m=+0.162801756 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Oct 14 06:02:42 localhost podman[304463]: 2025-10-14 10:02:42.712233555 +0000 UTC m=+0.247358004 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc.) Oct 14 06:02:42 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:02:42 localhost podman[304465]: 2025-10-14 10:02:42.73254145 +0000 UTC m=+0.263217299 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:02:42 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:02:42 localhost ceph-mon[301930]: Removed label mon from host np0005486728.localdomain Oct 14 06:02:42 localhost ceph-mon[301930]: Reconfiguring osd.1 (monmap changed)... 
Oct 14 06:02:42 localhost ceph-mon[301930]: Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:02:42 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:42 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:42 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:42 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 14 06:02:43 localhost ceph-mon[301930]: Removed label mgr from host np0005486728.localdomain Oct 14 06:02:43 localhost ceph-mon[301930]: Reconfiguring osd.5 (monmap changed)... Oct 14 06:02:43 localhost ceph-mon[301930]: Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:02:43 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:43 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:43 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:02:43 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:44 localhost ceph-mon[301930]: Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... 
Oct 14 06:02:44 localhost ceph-mon[301930]: Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:02:44 localhost ceph-mon[301930]: Removed label _admin from host np0005486728.localdomain Oct 14 06:02:44 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:44 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:44 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:45 localhost ceph-mon[301930]: Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... Oct 14 06:02:45 localhost ceph-mon[301930]: Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:02:45 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:45 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:45 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:02:45 localhost sshd[304531]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:02:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 4975 writes, 22K keys, 4975 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4975 writes, 716 syncs, 6.95 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 126 writes, 
388 keys, 126 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s#012Interval WAL: 126 writes, 52 syncs, 2.42 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 06:02:46 localhost ceph-mon[301930]: mon.np0005486731@4(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:02:46 localhost ceph-mon[301930]: Reconfiguring mon.np0005486732 (monmap changed)... Oct 14 06:02:46 localhost ceph-mon[301930]: Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:02:46 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:46 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:46 localhost ceph-mon[301930]: Reconfiguring crash.np0005486733 (monmap changed)... Oct 14 06:02:46 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:02:46 localhost ceph-mon[301930]: Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:02:48 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:48 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:48 localhost ceph-mon[301930]: Reconfiguring osd.0 (monmap changed)... 
Oct 14 06:02:48 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 14 06:02:48 localhost ceph-mon[301930]: Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:02:48 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:48 localhost ceph-mon[301930]: mon.np0005486731@4(peon) e7 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:02:48 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3405467654' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:02:48 localhost ceph-mon[301930]: mon.np0005486731@4(peon) e7 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:02:48 localhost ceph-mon[301930]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3405467654' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:02:49 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:49 localhost ceph-mon[301930]: Reconfiguring osd.3 (monmap changed)... 
Oct 14 06:02:49 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:02:49 localhost ceph-mon[301930]: Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:02:49 localhost sshd[304533]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 
localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 
10:02:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:02:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:02:50 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:50 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:50 localhost ceph-mon[301930]: Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... 
Oct 14 06:02:50 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:02:50 localhost ceph-mon[301930]: Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:02:50 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:50 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:50 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:02:51 localhost ceph-mon[301930]: mon.np0005486731@4(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:02:51 localhost ceph-mon[301930]: Reconfiguring mgr.np0005486733.primvu (monmap changed)... Oct 14 06:02:51 localhost ceph-mon[301930]: Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:02:51 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:51 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:51 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:02:52 localhost ceph-mon[301930]: Reconfiguring mon.np0005486733 (monmap changed)... 
Oct 14 06:02:52 localhost ceph-mon[301930]: Reconfiguring daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:02:52 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:52 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:52 localhost sshd[304536]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:54 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:54 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:54 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:54 localhost ceph-mon[301930]: Removing np0005486728.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:54 localhost ceph-mon[301930]: Updating np0005486729.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:54 localhost ceph-mon[301930]: Updating np0005486730.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:54 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:54 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:54 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:02:54 localhost ceph-mon[301930]: Removing np0005486728.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:02:54 localhost ceph-mon[301930]: Removing np0005486728.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:02:54 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:54 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' 
entity='mgr.np0005486730.ddfidc' Oct 14 06:02:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:02:54 localhost podman[304858]: 2025-10-14 10:02:54.977048466 +0000 UTC m=+0.090877908 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.3, tcib_managed=true) Oct 14 06:02:55 localhost podman[304858]: 2025-10-14 10:02:55.01746177 +0000 UTC m=+0.131291132 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:02:55 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: 
Deactivated successfully. Oct 14 06:02:55 localhost ceph-mon[301930]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:55 localhost ceph-mon[301930]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:55 localhost ceph-mon[301930]: Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:55 localhost ceph-mon[301930]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:55 localhost ceph-mon[301930]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: 
from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:55 localhost sshd[304878]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:56 localhost ceph-mon[301930]: mon.np0005486731@4(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:02:56 localhost ceph-mon[301930]: Removing daemon mgr.np0005486728.giajub from np0005486728.localdomain -- ports [9283, 8765] Oct 14 06:02:56 localhost ceph-mon[301930]: Added label _no_schedule to host np0005486728.localdomain Oct 14 06:02:56 localhost ceph-mon[301930]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005486728.localdomain Oct 14 06:02:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:02:57.627 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:02:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:02:57.627 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:02:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:02:57.628 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:02:57 localhost ceph-mon[301930]: 
from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth rm", "entity": "mgr.np0005486728.giajub"} : dispatch Oct 14 06:02:57 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005486728.giajub"}]': finished Oct 14 06:02:57 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:57 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:58 localhost ceph-mon[301930]: Removing key for mgr.np0005486728.giajub Oct 14 06:02:58 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:58 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486728.localdomain"} : dispatch Oct 14 06:02:58 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486728.localdomain"}]': finished Oct 14 06:02:58 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:02:58 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:58 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486729.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:02:59 localhost sshd[304917]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:02:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:02:59 localhost podman[304920]: 2025-10-14 10:02:59.547736912 +0000 UTC m=+0.084121037 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:02:59 localhost podman[304920]: 2025-10-14 10:02:59.557201516 +0000 UTC m=+0.093585681 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:02:59 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:02:59 localhost sshd[304955]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:02:59 localhost systemd[1]: tmp-crun.AhSgk4.mount: Deactivated successfully. Oct 14 06:02:59 localhost podman[304919]: 2025-10-14 10:02:59.650288592 +0000 UTC m=+0.187893309 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:02:59 localhost podman[304919]: 2025-10-14 10:02:59.655710007 +0000 UTC m=+0.193314724 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 06:02:59 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:02:59 localhost systemd-logind[760]: New session 68 of user tripleo-admin. Oct 14 06:02:59 localhost systemd[1]: Created slice User Slice of UID 1003. Oct 14 06:02:59 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Oct 14 06:02:59 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Oct 14 06:02:59 localhost systemd[1]: Starting User Manager for UID 1003... 
Oct 14 06:02:59 localhost ceph-mon[301930]: host np0005486728.localdomain `cephadm ls` failed: Cannot decode JSON: #012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 1540, in _run_cephadm_json#012 return json.loads(''.join(out))#012 File "/lib64/python3.9/json/__init__.py", line 346, in loads#012 return _default_decoder.decode(s)#012 File "/lib64/python3.9/json/decoder.py", line 337, in decode#012 obj, end = self.raw_decode(s, idx=_w(s, 0).end())#012 File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode#012 raise JSONDecodeError("Expecting value", s, err.value) from None#012json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) Oct 14 06:02:59 localhost ceph-mon[301930]: Removed host np0005486728.localdomain Oct 14 06:02:59 localhost ceph-mon[301930]: executing refresh((['np0005486728.localdomain', 'np0005486729.localdomain', 'np0005486730.localdomain', 'np0005486731.localdomain', 'np0005486732.localdomain', 'np0005486733.localdomain'],)) failed.#012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work#012 return f(*arg)#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh#012 and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label#012 host = self._get_stored_name(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name#012 self.assert_host(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host#012 raise OrchestratorError('host %s does not exist' % host)#012orchestrator._interface.OrchestratorError: host np0005486728.localdomain does not exist Oct 14 06:02:59 localhost ceph-mon[301930]: Reconfiguring crash.np0005486729 (monmap changed)... 
Oct 14 06:02:59 localhost ceph-mon[301930]: Reconfiguring daemon crash.np0005486729 on np0005486729.localdomain Oct 14 06:02:59 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:59 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:59 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:02:59 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:03:00 localhost systemd[304964]: Queued start job for default target Main User Target. Oct 14 06:03:00 localhost systemd[304964]: Created slice User Application Slice. Oct 14 06:03:00 localhost systemd[304964]: Started Mark boot as successful after the user session has run 2 minutes. Oct 14 06:03:00 localhost systemd[304964]: Started Daily Cleanup of User's Temporary Directories. Oct 14 06:03:00 localhost systemd[304964]: Reached target Paths. Oct 14 06:03:00 localhost systemd[304964]: Reached target Timers. Oct 14 06:03:00 localhost systemd[304964]: Starting D-Bus User Message Bus Socket... Oct 14 06:03:00 localhost systemd[304964]: Starting Create User's Volatile Files and Directories... Oct 14 06:03:00 localhost systemd[304964]: Finished Create User's Volatile Files and Directories. Oct 14 06:03:00 localhost systemd[304964]: Listening on D-Bus User Message Bus Socket. Oct 14 06:03:00 localhost systemd[304964]: Reached target Sockets. Oct 14 06:03:00 localhost systemd[304964]: Reached target Basic System. Oct 14 06:03:00 localhost systemd[304964]: Reached target Main User Target. Oct 14 06:03:00 localhost systemd[304964]: Startup finished in 180ms. Oct 14 06:03:00 localhost systemd[1]: Started User Manager for UID 1003. Oct 14 06:03:00 localhost systemd[1]: Started Session 68 of User tripleo-admin. 
Oct 14 06:03:00 localhost podman[246584]: time="2025-10-14T10:03:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:03:00 localhost podman[246584]: @ - - [14/Oct/2025:10:03:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:03:00 localhost podman[246584]: @ - - [14/Oct/2025:10:03:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18326 "" "Go-http-client/1.1" Oct 14 06:03:00 localhost python3[305106]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.103/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 14 06:03:00 localhost ceph-mon[301930]: Reconfiguring mon.np0005486729 (monmap changed)... 
Oct 14 06:03:00 localhost ceph-mon[301930]: Reconfiguring daemon mon.np0005486729 on np0005486729.localdomain Oct 14 06:03:00 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:00 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:00 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486729.xpybho", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:03:01 localhost ceph-mon[301930]: mon.np0005486731@4(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:03:01 localhost python3[305253]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.103/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 06:03:01 localhost ceph-mon[301930]: Reconfiguring mgr.np0005486729.xpybho (monmap changed)... 
Oct 14 06:03:01 localhost ceph-mon[301930]: Reconfiguring daemon mgr.np0005486729.xpybho on np0005486729.localdomain Oct 14 06:03:01 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:01 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:01 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:03:02 localhost python3[305398]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.103 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 06:03:02 localhost sshd[305400]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:02 localhost ceph-mon[301930]: Reconfiguring mon.np0005486730 (monmap changed)... Oct 14 06:03:02 localhost ceph-mon[301930]: Reconfiguring daemon mon.np0005486730 on np0005486730.localdomain Oct 14 06:03:02 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:02 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:02 localhost ceph-mon[301930]: Reconfiguring mgr.np0005486730.ddfidc (monmap changed)... 
Oct 14 06:03:02 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:03:02 localhost ceph-mon[301930]: Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain Oct 14 06:03:03 localhost openstack_network_exporter[248748]: ERROR 10:03:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:03:03 localhost openstack_network_exporter[248748]: ERROR 10:03:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:03:03 localhost openstack_network_exporter[248748]: ERROR 10:03:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:03:03 localhost openstack_network_exporter[248748]: ERROR 10:03:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:03:03 localhost openstack_network_exporter[248748]: Oct 14 06:03:03 localhost openstack_network_exporter[248748]: ERROR 10:03:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:03:03 localhost openstack_network_exporter[248748]: Oct 14 06:03:04 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:04 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:04 localhost ceph-mon[301930]: Reconfiguring crash.np0005486730 (monmap changed)... 
Oct 14 06:03:04 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:03:04 localhost ceph-mon[301930]: Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain Oct 14 06:03:04 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:04 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:04 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:03:04 localhost podman[305473]: Oct 14 06:03:04 localhost podman[305473]: 2025-10-14 10:03:04.599307622 +0000 UTC m=+0.080965272 container create 6e633f3d14619bc7624a4d0367e13e48f7ef6dce828fabcc119def7dde5b88e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_burnell, maintainer=Guillaume Abrioux , release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported 
base image., architecture=x86_64, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7) Oct 14 06:03:04 localhost systemd[1]: Started libpod-conmon-6e633f3d14619bc7624a4d0367e13e48f7ef6dce828fabcc119def7dde5b88e0.scope. Oct 14 06:03:04 localhost podman[305473]: 2025-10-14 10:03:04.566911203 +0000 UTC m=+0.048568903 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:04 localhost systemd[1]: Started libcrun container. Oct 14 06:03:04 localhost podman[305473]: 2025-10-14 10:03:04.690159698 +0000 UTC m=+0.171817348 container init 6e633f3d14619bc7624a4d0367e13e48f7ef6dce828fabcc119def7dde5b88e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_burnell, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, name=rhceph, version=7, vcs-type=git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , release=553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph) Oct 14 06:03:04 localhost podman[305473]: 2025-10-14 10:03:04.70069422 +0000 UTC m=+0.182351870 container start 
6e633f3d14619bc7624a4d0367e13e48f7ef6dce828fabcc119def7dde5b88e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_burnell, build-date=2025-09-24T08:57:55, RELEASE=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553) Oct 14 06:03:04 localhost podman[305473]: 2025-10-14 10:03:04.700954037 +0000 UTC m=+0.182612007 container attach 6e633f3d14619bc7624a4d0367e13e48f7ef6dce828fabcc119def7dde5b88e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_burnell, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, ceph=True, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, 
name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_BRANCH=main, io.openshift.expose-services=, version=7, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:03:04 localhost quirky_burnell[305489]: 167 167 Oct 14 06:03:04 localhost systemd[1]: libpod-6e633f3d14619bc7624a4d0367e13e48f7ef6dce828fabcc119def7dde5b88e0.scope: Deactivated successfully. Oct 14 06:03:04 localhost podman[305473]: 2025-10-14 10:03:04.705265782 +0000 UTC m=+0.186923442 container died 6e633f3d14619bc7624a4d0367e13e48f7ef6dce828fabcc119def7dde5b88e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_burnell, version=7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:03:04 localhost podman[305494]: 2025-10-14 10:03:04.802015077 +0000 UTC 
m=+0.087873867 container remove 6e633f3d14619bc7624a4d0367e13e48f7ef6dce828fabcc119def7dde5b88e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_burnell, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_BRANCH=main, version=7, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vendor=Red Hat, Inc., release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux ) Oct 14 06:03:04 localhost systemd[1]: libpod-conmon-6e633f3d14619bc7624a4d0367e13e48f7ef6dce828fabcc119def7dde5b88e0.scope: Deactivated successfully. Oct 14 06:03:05 localhost ceph-mon[301930]: Reconfiguring crash.np0005486731 (monmap changed)... 
Oct 14 06:03:05 localhost ceph-mon[301930]: Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain Oct 14 06:03:05 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:05 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:05 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:03:05 localhost podman[305565]: Oct 14 06:03:05 localhost podman[305565]: 2025-10-14 10:03:05.521587531 +0000 UTC m=+0.072385462 container create 8320d941a425a386e91d337160a35a8f818c3e099d3cd042fefd54d38757b30d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_ganguly, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, version=7, release=553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, io.buildah.version=1.33.12, GIT_CLEAN=True) Oct 14 06:03:05 localhost systemd[1]: Started libpod-conmon-8320d941a425a386e91d337160a35a8f818c3e099d3cd042fefd54d38757b30d.scope. 
Oct 14 06:03:05 localhost systemd[1]: Started libcrun container. Oct 14 06:03:05 localhost podman[305565]: 2025-10-14 10:03:05.485817282 +0000 UTC m=+0.036615233 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:05 localhost podman[305565]: 2025-10-14 10:03:05.586876512 +0000 UTC m=+0.137674433 container init 8320d941a425a386e91d337160a35a8f818c3e099d3cd042fefd54d38757b30d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_ganguly, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.33.12, GIT_CLEAN=True, CEPH_POINT_RELEASE=) Oct 14 06:03:05 localhost podman[305565]: 2025-10-14 10:03:05.596447418 +0000 UTC m=+0.147245349 container start 8320d941a425a386e91d337160a35a8f818c3e099d3cd042fefd54d38757b30d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_ganguly, GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, 
com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:03:05 localhost podman[305565]: 2025-10-14 10:03:05.596802008 +0000 UTC m=+0.147599979 container attach 8320d941a425a386e91d337160a35a8f818c3e099d3cd042fefd54d38757b30d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_ganguly, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, version=7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:03:05 localhost xenodochial_ganguly[305581]: 167 167 Oct 14 06:03:05 localhost systemd[1]: libpod-8320d941a425a386e91d337160a35a8f818c3e099d3cd042fefd54d38757b30d.scope: Deactivated successfully. Oct 14 06:03:05 localhost podman[305565]: 2025-10-14 10:03:05.602302036 +0000 UTC m=+0.153099967 container died 8320d941a425a386e91d337160a35a8f818c3e099d3cd042fefd54d38757b30d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_ganguly, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:03:05 localhost systemd[1]: var-lib-containers-storage-overlay-863b1a5d275b2978aa53d462691fd80fb32677d48208fb7849568c2e15f9034c-merged.mount: Deactivated successfully. Oct 14 06:03:05 localhost systemd[1]: var-lib-containers-storage-overlay-0dfd400f04f3bda19d01f9989c39d9ec731a9b0f73536dcdb38b7c8e4a565c0c-merged.mount: Deactivated successfully. 
Oct 14 06:03:05 localhost podman[305586]: 2025-10-14 10:03:05.709042098 +0000 UTC m=+0.096745496 container remove 8320d941a425a386e91d337160a35a8f818c3e099d3cd042fefd54d38757b30d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_ganguly, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph) Oct 14 06:03:05 localhost systemd[1]: libpod-conmon-8320d941a425a386e91d337160a35a8f818c3e099d3cd042fefd54d38757b30d.scope: Deactivated successfully. 
Oct 14 06:03:06 localhost sshd[305644]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:06 localhost ceph-mon[301930]: mon.np0005486731@4(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:03:06 localhost podman[305664]: Oct 14 06:03:06 localhost podman[305664]: 2025-10-14 10:03:06.567172647 +0000 UTC m=+0.079669747 container create a57652d0af380ef501b3c022019ea8dcda0569ba25a7df816d1788a8306c2023 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_benz, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, release=553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main) Oct 14 06:03:06 localhost systemd[1]: Started libpod-conmon-a57652d0af380ef501b3c022019ea8dcda0569ba25a7df816d1788a8306c2023.scope. Oct 14 06:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:03:06 localhost systemd[1]: Started libcrun container. 
Oct 14 06:03:06 localhost podman[305664]: 2025-10-14 10:03:06.537163502 +0000 UTC m=+0.049660672 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:06 localhost podman[305664]: 2025-10-14 10:03:06.638342345 +0000 UTC m=+0.150839445 container init a57652d0af380ef501b3c022019ea8dcda0569ba25a7df816d1788a8306c2023 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_benz, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , ceph=True, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-type=git, architecture=x86_64, distribution-scope=public) Oct 14 06:03:06 localhost podman[305664]: 2025-10-14 10:03:06.650009928 +0000 UTC m=+0.162507028 container start a57652d0af380ef501b3c022019ea8dcda0569ba25a7df816d1788a8306c2023 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_benz, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph 
Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, release=553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public) Oct 14 06:03:06 localhost podman[305664]: 2025-10-14 10:03:06.650473251 +0000 UTC m=+0.162970411 container attach a57652d0af380ef501b3c022019ea8dcda0569ba25a7df816d1788a8306c2023 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_benz, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=553, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:03:06 localhost 
hardcore_benz[305679]: 167 167 Oct 14 06:03:06 localhost systemd[1]: libpod-a57652d0af380ef501b3c022019ea8dcda0569ba25a7df816d1788a8306c2023.scope: Deactivated successfully. Oct 14 06:03:06 localhost podman[305664]: 2025-10-14 10:03:06.654041316 +0000 UTC m=+0.166538446 container died a57652d0af380ef501b3c022019ea8dcda0569ba25a7df816d1788a8306c2023 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_benz, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7) Oct 14 06:03:06 localhost ceph-mon[301930]: Reconfiguring osd.2 (monmap changed)... 
Oct 14 06:03:06 localhost ceph-mon[301930]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:03:06 localhost ceph-mon[301930]: Saving service mon spec with placement label:mon Oct 14 06:03:06 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:06 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:06 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:06 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.682469) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436186682535, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 3057, "num_deletes": 512, "total_data_size": 8966681, "memory_usage": 9558704, "flush_reason": "Manual Compaction"} Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436186719211, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 5446778, "file_checksum": "", 
"file_checksum_func_name": "Unknown", "smallest_seqno": 9822, "largest_seqno": 12878, "table_properties": {"data_size": 5434333, "index_size": 7350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4037, "raw_key_size": 34522, "raw_average_key_size": 21, "raw_value_size": 5405035, "raw_average_value_size": 3367, "num_data_blocks": 317, "num_entries": 1605, "num_filter_entries": 1605, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436131, "oldest_key_time": 1760436131, "file_creation_time": 1760436186, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8075a854-41fd-4ab6-89af-6366aa1d00c3", "db_session_id": "PP6GOKDVVBVE8Q3KEL61", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 36800 microseconds, and 12198 cpu microseconds. Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.719274) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 5446778 bytes OK Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.719301) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.721786) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.721808) EVENT_LOG_v1 {"time_micros": 1760436186721802, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.721831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 8951010, prev total WAL file size 8951010, number of live WAL files 2. Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.723847) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. 
'7061786F73003130373934' seq:0, type:0; will stop at (end) Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(5319KB)], [15(9497KB)] Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436186723904, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 15172193, "oldest_snapshot_seqno": -1} Oct 14 06:03:06 localhost podman[305680]: 2025-10-14 10:03:06.745226151 +0000 UTC m=+0.121843608 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9794 keys, 13104772 bytes, temperature: kUnknown Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436186814316, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 13104772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13046561, "index_size": 32638, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 261689, "raw_average_key_size": 26, "raw_value_size": 12876529, "raw_average_value_size": 1314, "num_data_blocks": 1250, "num_entries": 9794, "num_filter_entries": 9794, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436121, 
"oldest_key_time": 0, "file_creation_time": 1760436186, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8075a854-41fd-4ab6-89af-6366aa1d00c3", "db_session_id": "PP6GOKDVVBVE8Q3KEL61", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.814770) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13104772 bytes Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.816896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.5 rd, 144.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.2, 9.3 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(5.2) write-amplify(2.4) OK, records in: 10865, records dropped: 1071 output_compression: NoCompression Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.816927) EVENT_LOG_v1 {"time_micros": 1760436186816913, "job": 6, "event": "compaction_finished", "compaction_time_micros": 90602, "compaction_time_cpu_micros": 35912, "output_level": 6, "num_output_files": 1, "total_output_size": 13104772, "num_input_records": 10865, "num_output_records": 9794, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:03:06 localhost 
ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436186817749, "job": 6, "event": "table_file_deletion", "file_number": 17} Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436186819154, "job": 6, "event": "table_file_deletion", "file_number": 15} Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.723670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.819236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.819254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.819258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.819261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:03:06 localhost ceph-mon[301930]: rocksdb: (Original Log Time 2025/10/14-10:03:06.819264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:03:06 localhost podman[305693]: 2025-10-14 10:03:06.828831382 +0000 UTC m=+0.166950157 container remove a57652d0af380ef501b3c022019ea8dcda0569ba25a7df816d1788a8306c2023 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_benz, GIT_CLEAN=True, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=553) Oct 14 06:03:06 localhost systemd[1]: libpod-conmon-a57652d0af380ef501b3c022019ea8dcda0569ba25a7df816d1788a8306c2023.scope: Deactivated successfully. 
Oct 14 06:03:06 localhost podman[305680]: 2025-10-14 10:03:06.858157219 +0000 UTC m=+0.234774716 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:03:06 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:03:07 localhost systemd[1]: tmp-crun.lR7TWj.mount: Deactivated successfully. Oct 14 06:03:07 localhost systemd[1]: var-lib-containers-storage-overlay-00dee91611e5d3fda951d3a9ed84abab3dcddce00e335fe8835a48d574abb6dc-merged.mount: Deactivated successfully. Oct 14 06:03:07 localhost podman[305778]: Oct 14 06:03:07 localhost podman[305778]: 2025-10-14 10:03:07.671097397 +0000 UTC m=+0.078420544 container create 1e56ff24f333534e61504800386ee987352bc3988c0a10f25c1dfe3511117937 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_sutherland, release=553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, architecture=x86_64, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main) Oct 14 06:03:07 localhost ceph-mon[301930]: Reconfiguring osd.4 (monmap changed)... 
Oct 14 06:03:07 localhost ceph-mon[301930]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:03:07 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:07 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:07 localhost ceph-mon[301930]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:03:07 localhost systemd[1]: Started libpod-conmon-1e56ff24f333534e61504800386ee987352bc3988c0a10f25c1dfe3511117937.scope. Oct 14 06:03:07 localhost systemd[1]: Started libcrun container. Oct 14 06:03:07 localhost podman[305778]: 2025-10-14 10:03:07.637430094 +0000 UTC m=+0.044752491 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:07 localhost podman[305778]: 2025-10-14 10:03:07.742784949 +0000 UTC m=+0.150107326 container init 1e56ff24f333534e61504800386ee987352bc3988c0a10f25c1dfe3511117937 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_sutherland, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.buildah.version=1.33.12, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , summary=Provides the latest Red 
Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:03:07 localhost podman[305778]: 2025-10-14 10:03:07.760824952 +0000 UTC m=+0.168147319 container start 1e56ff24f333534e61504800386ee987352bc3988c0a10f25c1dfe3511117937 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_sutherland, architecture=x86_64, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True) Oct 14 06:03:07 localhost podman[305778]: 2025-10-14 10:03:07.761143891 +0000 UTC m=+0.168466298 container attach 1e56ff24f333534e61504800386ee987352bc3988c0a10f25c1dfe3511117937 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_sutherland, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, RELEASE=main, release=553, com.redhat.component=rhceph-container, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55) Oct 14 06:03:07 localhost ecstatic_sutherland[305793]: 167 167 Oct 14 06:03:07 localhost systemd[1]: libpod-1e56ff24f333534e61504800386ee987352bc3988c0a10f25c1dfe3511117937.scope: Deactivated successfully. 
Oct 14 06:03:07 localhost podman[305778]: 2025-10-14 10:03:07.766926726 +0000 UTC m=+0.174249123 container died 1e56ff24f333534e61504800386ee987352bc3988c0a10f25c1dfe3511117937 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_sutherland, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:03:07 localhost podman[305798]: 2025-10-14 10:03:07.86290133 +0000 UTC m=+0.085361390 container remove 1e56ff24f333534e61504800386ee987352bc3988c0a10f25c1dfe3511117937 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_sutherland, RELEASE=main, version=7, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 06:03:07 localhost systemd[1]: libpod-conmon-1e56ff24f333534e61504800386ee987352bc3988c0a10f25c1dfe3511117937.scope: Deactivated successfully. Oct 14 06:03:08 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557a99fd4f20 mon_map magic: 0 from mon.4 v2:172.18.0.106:3300/0 Oct 14 06:03:08 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557aa3a50000 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0 Oct 14 06:03:08 localhost ceph-mon[301930]: mon.np0005486731@4(peon) e8 removed from monmap, suicide. 
Oct 14 06:03:08 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557a99fd51e0 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0 Oct 14 06:03:08 localhost podman[305863]: 2025-10-14 10:03:08.412100976 +0000 UTC m=+0.050469544 container died 8bb7ee7976ae565d31875715174b77f21bcce98caa433d6330d6cd13c64416f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mon-np0005486731, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, version=7, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:03:08 localhost podman[305863]: 2025-10-14 10:03:08.443800375 +0000 UTC m=+0.082168903 container remove 8bb7ee7976ae565d31875715174b77f21bcce98caa433d6330d6cd13c64416f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mon-np0005486731, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , distribution-scope=public, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7) Oct 14 06:03:08 localhost podman[305927]: Oct 14 06:03:08 localhost systemd[1]: tmp-crun.qYpy2u.mount: Deactivated successfully. Oct 14 06:03:08 localhost systemd[1]: var-lib-containers-storage-overlay-d76e93a932789ca3c035d796e8b56856074657128776330046a0542ddb037ea7-merged.mount: Deactivated successfully. Oct 14 06:03:08 localhost systemd[1]: var-lib-containers-storage-overlay-38b1814c433f574c7d4162875dc1d024d406d345df0a3d2dd282f4d57e62ecf8-merged.mount: Deactivated successfully. 
Oct 14 06:03:08 localhost podman[305927]: 2025-10-14 10:03:08.640743276 +0000 UTC m=+0.108513541 container create 7f67953e93a71e4c8a64fe6e9b202cdaaee9808e192a02f4e6e0604905ad23b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_shtern, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, RELEASE=main, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, maintainer=Guillaume Abrioux , io.openshift.expose-services=, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 06:03:08 localhost podman[305927]: 2025-10-14 10:03:08.583349077 +0000 UTC m=+0.051119382 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:08 localhost systemd[1]: Started libpod-conmon-7f67953e93a71e4c8a64fe6e9b202cdaaee9808e192a02f4e6e0604905ad23b4.scope. Oct 14 06:03:08 localhost systemd[1]: tmp-crun.8NI7Gb.mount: Deactivated successfully. Oct 14 06:03:08 localhost systemd[1]: Started libcrun container. 
Oct 14 06:03:08 localhost podman[305927]: 2025-10-14 10:03:08.739674369 +0000 UTC m=+0.207444634 container init 7f67953e93a71e4c8a64fe6e9b202cdaaee9808e192a02f4e6e0604905ad23b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_shtern, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, version=7, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public) Oct 14 06:03:08 localhost podman[305927]: 2025-10-14 10:03:08.750613832 +0000 UTC m=+0.218384097 container start 7f67953e93a71e4c8a64fe6e9b202cdaaee9808e192a02f4e6e0604905ad23b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_shtern, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, release=553, name=rhceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64) Oct 14 06:03:08 localhost podman[305927]: 2025-10-14 10:03:08.75087649 +0000 UTC m=+0.218646755 container attach 7f67953e93a71e4c8a64fe6e9b202cdaaee9808e192a02f4e6e0604905ad23b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_shtern, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, GIT_CLEAN=True) Oct 14 06:03:08 localhost flamboyant_shtern[305953]: 167 167 Oct 14 06:03:08 localhost systemd[1]: 
libpod-7f67953e93a71e4c8a64fe6e9b202cdaaee9808e192a02f4e6e0604905ad23b4.scope: Deactivated successfully. Oct 14 06:03:08 localhost podman[305927]: 2025-10-14 10:03:08.75540512 +0000 UTC m=+0.223175405 container died 7f67953e93a71e4c8a64fe6e9b202cdaaee9808e192a02f4e6e0604905ad23b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_shtern, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, maintainer=Guillaume Abrioux , distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:03:08 localhost podman[305958]: 2025-10-14 10:03:08.860628862 +0000 UTC m=+0.093696044 container remove 7f67953e93a71e4c8a64fe6e9b202cdaaee9808e192a02f4e6e0604905ad23b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_shtern, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, 
io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, version=7, build-date=2025-09-24T08:57:55, ceph=True, release=553) Oct 14 06:03:08 localhost systemd[1]: libpod-conmon-7f67953e93a71e4c8a64fe6e9b202cdaaee9808e192a02f4e6e0604905ad23b4.scope: Deactivated successfully. Oct 14 06:03:09 localhost sshd[306045]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:09 localhost systemd[1]: ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf@mon.np0005486731.service: Deactivated successfully. Oct 14 06:03:09 localhost systemd[1]: Stopped Ceph mon.np0005486731 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf. Oct 14 06:03:09 localhost systemd[1]: ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf@mon.np0005486731.service: Consumed 4.102s CPU time. Oct 14 06:03:09 localhost systemd[1]: Reloading. Oct 14 06:03:09 localhost systemd-sysv-generator[306077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 06:03:09 localhost systemd-rc-local-generator[306071]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 06:03:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 14 06:03:09 localhost systemd[1]: var-lib-containers-storage-overlay-0ff197c6f75287035ae10e45cb079ce1ce93f66833cb9e356f47708571e165d0-merged.mount: Deactivated successfully. Oct 14 06:03:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:03:10 localhost systemd[1]: tmp-crun.OSt4KC.mount: Deactivated successfully. Oct 14 06:03:10 localhost podman[306086]: 2025-10-14 10:03:10.564019466 +0000 UTC m=+0.100909647 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:03:10 localhost podman[306086]: 2025-10-14 10:03:10.571107925 +0000 UTC m=+0.107998066 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:03:10 localhost systemd[1]: 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:03:12 localhost sshd[306106]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:03:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:03:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:03:13 localhost podman[306109]: 2025-10-14 10:03:13.002951941 +0000 UTC m=+0.096371145 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2) Oct 14 06:03:13 localhost podman[306108]: 2025-10-14 10:03:13.084091966 +0000 UTC m=+0.180290034 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 06:03:13 localhost podman[306109]: 2025-10-14 10:03:13.090064197 +0000 UTC m=+0.183483341 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, 
container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:03:13 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:03:13 localhost podman[306110]: 2025-10-14 10:03:13.107469524 +0000 UTC m=+0.197568469 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:03:13 localhost podman[306108]: 2025-10-14 10:03:13.128023855 +0000 UTC m=+0.224221873 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, 
config_id=edpm, release=1755695350, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., 
architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 06:03:13 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:03:13 localhost podman[306110]: 2025-10-14 10:03:13.140920171 +0000 UTC m=+0.231019126 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:03:13 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:03:13 localhost nova_compute[295778]: 2025-10-14 10:03:13.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:03:13 localhost nova_compute[295778]: 2025-10-14 10:03:13.926 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:03:13 localhost nova_compute[295778]: 2025-10-14 10:03:13.927 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:03:13 localhost nova_compute[295778]: 2025-10-14 10:03:13.927 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:03:13 localhost nova_compute[295778]: 2025-10-14 10:03:13.927 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:03:13 localhost nova_compute[295778]: 2025-10-14 10:03:13.927 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd 
(subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:03:14 localhost nova_compute[295778]: 2025-10-14 10:03:14.391 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:03:14 localhost nova_compute[295778]: 2025-10-14 10:03:14.578 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:03:14 localhost nova_compute[295778]: 2025-10-14 10:03:14.580 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12336MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", 
"address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:03:14 localhost nova_compute[295778]: 2025-10-14 10:03:14.580 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:03:14 localhost nova_compute[295778]: 2025-10-14 10:03:14.581 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:03:14 localhost nova_compute[295778]: 2025-10-14 10:03:14.664 2 DEBUG nova.compute.resource_tracker [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:03:14 localhost nova_compute[295778]: 2025-10-14 10:03:14.664 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:03:14 localhost nova_compute[295778]: 2025-10-14 10:03:14.686 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:03:15 localhost nova_compute[295778]: 2025-10-14 10:03:15.079 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:03:15 localhost nova_compute[295778]: 2025-10-14 10:03:15.086 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:03:15 localhost nova_compute[295778]: 2025-10-14 10:03:15.102 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 
'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:03:15 localhost nova_compute[295778]: 2025-10-14 10:03:15.104 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:03:15 localhost nova_compute[295778]: 2025-10-14 10:03:15.105 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:03:16 localhost sshd[306220]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:17 localhost nova_compute[295778]: 2025-10-14 10:03:17.106 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:03:17 localhost nova_compute[295778]: 2025-10-14 10:03:17.106 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:03:17 localhost nova_compute[295778]: 2025-10-14 10:03:17.904 2 DEBUG oslo_service.periodic_task [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:03:17 localhost nova_compute[295778]: 2025-10-14 10:03:17.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:03:17 localhost nova_compute[295778]: 2025-10-14 10:03:17.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:03:18 localhost nova_compute[295778]: 2025-10-14 10:03:18.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:03:18 localhost nova_compute[295778]: 2025-10-14 10:03:18.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:03:18 localhost nova_compute[295778]: 2025-10-14 10:03:18.903 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:03:18 localhost nova_compute[295778]: 2025-10-14 10:03:18.903 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:03:18 localhost nova_compute[295778]: 2025-10-14 10:03:18.921 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:03:19 localhost sshd[306317]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:19 localhost podman[306332]: 2025-10-14 10:03:19.552873796 +0000 UTC m=+0.096088467 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, ceph=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, release=553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:03:19 localhost podman[306332]: 2025-10-14 10:03:19.654628685 +0000 UTC m=+0.197843316 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., ceph=True, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=) Oct 14 06:03:19 localhost nova_compute[295778]: 2025-10-14 10:03:19.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:03:19 localhost nova_compute[295778]: 2025-10-14 10:03:19.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:03:22 localhost podman[306851]: Oct 14 06:03:22 localhost podman[306851]: 2025-10-14 10:03:22.324071752 +0000 UTC m=+0.080988572 container create bb87f6fa9c9598b5f6f79870478ada40474d6d0c6dbfc1d39ab92a153e7f9959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcnulty, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, release=553, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:03:22 localhost systemd[1]: Started libpod-conmon-bb87f6fa9c9598b5f6f79870478ada40474d6d0c6dbfc1d39ab92a153e7f9959.scope. Oct 14 06:03:22 localhost systemd[1]: Started libcrun container. 
Oct 14 06:03:22 localhost podman[306851]: 2025-10-14 10:03:22.290828741 +0000 UTC m=+0.047745611 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:22 localhost podman[306851]: 2025-10-14 10:03:22.394187282 +0000 UTC m=+0.151104102 container init bb87f6fa9c9598b5f6f79870478ada40474d6d0c6dbfc1d39ab92a153e7f9959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcnulty, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, version=7, release=553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, architecture=x86_64, ceph=True, vendor=Red Hat, Inc.) 
Oct 14 06:03:22 localhost podman[306851]: 2025-10-14 10:03:22.411504927 +0000 UTC m=+0.168421757 container start bb87f6fa9c9598b5f6f79870478ada40474d6d0c6dbfc1d39ab92a153e7f9959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcnulty, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, release=553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Oct 14 06:03:22 localhost podman[306851]: 2025-10-14 10:03:22.411776894 +0000 UTC m=+0.168693714 container attach bb87f6fa9c9598b5f6f79870478ada40474d6d0c6dbfc1d39ab92a153e7f9959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcnulty, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, release=553, ceph=True, name=rhceph, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 06:03:22 localhost laughing_mcnulty[306866]: 167 167 Oct 14 06:03:22 localhost systemd[1]: libpod-bb87f6fa9c9598b5f6f79870478ada40474d6d0c6dbfc1d39ab92a153e7f9959.scope: Deactivated successfully. Oct 14 06:03:22 localhost podman[306851]: 2025-10-14 10:03:22.415416381 +0000 UTC m=+0.172333201 container died bb87f6fa9c9598b5f6f79870478ada40474d6d0c6dbfc1d39ab92a153e7f9959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcnulty, GIT_CLEAN=True, vcs-type=git, distribution-scope=public, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , ceph=True, CEPH_POINT_RELEASE=, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:03:22 localhost podman[306871]: 2025-10-14 10:03:22.507234693 +0000 UTC m=+0.082794560 container remove bb87f6fa9c9598b5f6f79870478ada40474d6d0c6dbfc1d39ab92a153e7f9959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcnulty, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, name=rhceph) Oct 14 06:03:22 localhost systemd[1]: libpod-conmon-bb87f6fa9c9598b5f6f79870478ada40474d6d0c6dbfc1d39ab92a153e7f9959.scope: Deactivated successfully. 
Oct 14 06:03:22 localhost podman[306888]: Oct 14 06:03:22 localhost podman[306888]: 2025-10-14 10:03:22.619965506 +0000 UTC m=+0.076631056 container create 2d9374802a312502e3f45e47a5ac1d7fc2783eb4f8f3e5265bcd0c024954c8bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_shirley, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, release=553, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux ) Oct 14 06:03:22 localhost systemd[1]: Started libpod-conmon-2d9374802a312502e3f45e47a5ac1d7fc2783eb4f8f3e5265bcd0c024954c8bb.scope. Oct 14 06:03:22 localhost systemd[1]: Started libcrun container. 
Oct 14 06:03:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a12f70b711f621448362d457dac4b34b99fe6abe4ba44341597b4aa4d03ba53/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Oct 14 06:03:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a12f70b711f621448362d457dac4b34b99fe6abe4ba44341597b4aa4d03ba53/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Oct 14 06:03:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a12f70b711f621448362d457dac4b34b99fe6abe4ba44341597b4aa4d03ba53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 06:03:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a12f70b711f621448362d457dac4b34b99fe6abe4ba44341597b4aa4d03ba53/merged/var/lib/ceph/mon/ceph-np0005486731 supports timestamps until 2038 (0x7fffffff) Oct 14 06:03:22 localhost podman[306888]: 2025-10-14 10:03:22.677085028 +0000 UTC m=+0.133750568 container init 2d9374802a312502e3f45e47a5ac1d7fc2783eb4f8f3e5265bcd0c024954c8bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_shirley, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7) Oct 14 06:03:22 localhost podman[306888]: 2025-10-14 10:03:22.588495352 +0000 UTC m=+0.045160952 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:22 localhost podman[306888]: 2025-10-14 10:03:22.687511647 +0000 UTC m=+0.144177197 container start 2d9374802a312502e3f45e47a5ac1d7fc2783eb4f8f3e5265bcd0c024954c8bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_shirley, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, name=rhceph, release=553) Oct 14 06:03:22 localhost podman[306888]: 2025-10-14 10:03:22.687793975 +0000 UTC m=+0.144459515 container attach 2d9374802a312502e3f45e47a5ac1d7fc2783eb4f8f3e5265bcd0c024954c8bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_shirley, io.openshift.tags=rhceph ceph, release=553, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vcs-type=git, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:03:22 localhost sshd[306911]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:22 localhost systemd[1]: libpod-2d9374802a312502e3f45e47a5ac1d7fc2783eb4f8f3e5265bcd0c024954c8bb.scope: Deactivated successfully. 
Oct 14 06:03:22 localhost podman[306888]: 2025-10-14 10:03:22.783770669 +0000 UTC m=+0.240436269 container died 2d9374802a312502e3f45e47a5ac1d7fc2783eb4f8f3e5265bcd0c024954c8bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_shirley, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, release=553, build-date=2025-09-24T08:57:55, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:03:22 localhost podman[306931]: 2025-10-14 10:03:22.875151448 +0000 UTC m=+0.082987495 container remove 2d9374802a312502e3f45e47a5ac1d7fc2783eb4f8f3e5265bcd0c024954c8bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_shirley, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, RELEASE=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True) Oct 14 06:03:22 localhost systemd[1]: libpod-conmon-2d9374802a312502e3f45e47a5ac1d7fc2783eb4f8f3e5265bcd0c024954c8bb.scope: Deactivated successfully. Oct 14 06:03:22 localhost systemd[1]: Reloading. Oct 14 06:03:23 localhost systemd-sysv-generator[306971]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 06:03:23 localhost systemd-rc-local-generator[306967]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 06:03:23 localhost sshd[306980]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 06:03:23 localhost systemd[1]: tmp-crun.XP3uEq.mount: Deactivated successfully. Oct 14 06:03:23 localhost systemd[1]: var-lib-containers-storage-overlay-b52eef97801b119aa9c1861618b14381fe49ae1d6d5e1ec285a5f0c4ff685356-merged.mount: Deactivated successfully. Oct 14 06:03:23 localhost systemd[1]: Reloading. Oct 14 06:03:23 localhost systemd-sysv-generator[307016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 14 06:03:23 localhost systemd-rc-local-generator[307011]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 14 06:03:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 14 06:03:23 localhost systemd[1]: Starting Ceph mon.np0005486731 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf... Oct 14 06:03:24 localhost podman[307075]: Oct 14 06:03:24 localhost podman[307075]: 2025-10-14 10:03:24.014039636 +0000 UTC m=+0.073511222 container create dad8389d42dac2a67e6b2ea50b40d1035aa726995521436a611cb4746e401386 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mon-np0005486731, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_CLEAN=True, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux ) Oct 14 06:03:24 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/8ba698147ec2d7251847de347550cac40b02d6bafa590edcdb5d194e9c7b2499/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 14 06:03:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba698147ec2d7251847de347550cac40b02d6bafa590edcdb5d194e9c7b2499/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 06:03:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba698147ec2d7251847de347550cac40b02d6bafa590edcdb5d194e9c7b2499/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 06:03:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba698147ec2d7251847de347550cac40b02d6bafa590edcdb5d194e9c7b2499/merged/var/lib/ceph/mon/ceph-np0005486731 supports timestamps until 2038 (0x7fffffff) Oct 14 06:03:24 localhost podman[307075]: 2025-10-14 10:03:24.066669187 +0000 UTC m=+0.126140783 container init dad8389d42dac2a67e6b2ea50b40d1035aa726995521436a611cb4746e401386 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mon-np0005486731, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_CLEAN=True, 
com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:03:24 localhost podman[307075]: 2025-10-14 10:03:24.076700796 +0000 UTC m=+0.136172412 container start dad8389d42dac2a67e6b2ea50b40d1035aa726995521436a611cb4746e401386 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mon-np0005486731, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64) Oct 14 06:03:24 localhost bash[307075]: dad8389d42dac2a67e6b2ea50b40d1035aa726995521436a611cb4746e401386 Oct 14 06:03:24 localhost podman[307075]: 2025-10-14 10:03:23.983542758 +0000 UTC m=+0.043014384 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:24 localhost systemd[1]: Started Ceph mon.np0005486731 for fcadf6e2-9176-5818-a8d0-37b19acf8eaf. 
Oct 14 06:03:24 localhost ceph-mon[307093]: set uid:gid to 167:167 (ceph:ceph) Oct 14 06:03:24 localhost ceph-mon[307093]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Oct 14 06:03:24 localhost ceph-mon[307093]: pidfile_write: ignore empty --pid-file Oct 14 06:03:24 localhost ceph-mon[307093]: load: jerasure load: lrc Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: RocksDB version: 7.9.2 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Git sha 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: DB SUMMARY Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: DB Session ID: J53B5YABCFHMI3BNHYZN Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: CURRENT file: CURRENT Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: IDENTITY file: IDENTITY Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005486731/store.db dir, Total Num: 0, files: Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005486731/store.db: 000004.log size: 886 ; Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.error_if_exists: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.create_if_missing: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.paranoid_checks: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.env: 0x563d496cf9e0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.fs: PosixFileSystem Oct 14 
06:03:24 localhost ceph-mon[307093]: rocksdb: Options.info_log: 0x563d4a772d20 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_file_opening_threads: 16 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.statistics: (nil) Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.use_fsync: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_log_file_size: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.log_file_time_to_roll: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.keep_log_file_num: 1000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.recycle_log_file_num: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.allow_fallocate: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.allow_mmap_reads: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.allow_mmap_writes: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.use_direct_reads: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.create_missing_column_families: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.db_log_dir: Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.wal_dir: Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.table_cache_numshardbits: 6 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 14 06:03:24 localhost 
ceph-mon[307093]: rocksdb: Options.advise_random_on_open: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.db_write_buffer_size: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.write_buffer_manager: 0x563d4a783540 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.random_access_max_buffer_size: 1048576 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.use_adaptive_mutex: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.rate_limiter: (nil) Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.wal_recovery_mode: 2 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.enable_thread_tracking: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.enable_pipelined_write: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.unordered_write: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.row_cache: None Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.wal_filter: None Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.allow_ingest_behind: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.two_write_queues: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.manual_wal_flush: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.wal_compression: 0 Oct 
14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.atomic_flush: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.persist_stats_to_disk: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.log_readahead_size: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.file_checksum_gen_factory: Unknown Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.best_efforts_recovery: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.allow_data_in_errors: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.db_host_id: __hostname__ Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.enforce_single_del_contracts: true Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_background_jobs: 2 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_background_compactions: -1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_subcompactions: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.delayed_write_rate : 16777216 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_total_wal_size: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.stats_dump_period_sec: 600 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.stats_persist_period_sec: 600 Oct 14 06:03:24 
localhost ceph-mon[307093]: rocksdb: Options.stats_history_buffer_size: 1048576 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_open_files: -1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bytes_per_sync: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.strict_bytes_per_sync: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_readahead_size: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_background_flushes: -1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Compression algorithms supported: Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: #011kZSTD supported: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: #011kXpressCompression supported: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: #011kBZip2Compression supported: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: #011kLZ4Compression supported: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: #011kZlibCompression supported: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: #011kSnappyCompression supported: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: DMutex implementation: pthread_mutex_t Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005486731/store.db/MANIFEST-000005 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 14 06:03:24 localhost 
ceph-mon[307093]: rocksdb: Options.merge_operator: Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_filter: None Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_filter_factory: None Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.sst_partitioner_factory: None Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.memtable_factory: SkipListFactory Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.table_factory: BlockBasedTable Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563d4a772980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563d4a76f350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.write_buffer_size: 33554432 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_write_buffer_number: 2 Oct 14 06:03:24 localhost 
ceph-mon[307093]: rocksdb: Options.compression: NoCompression Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression: Disabled Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.prefix_extractor: nullptr Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.num_levels: 7 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compression_opts.window_bits: -14 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compression_opts.level: 32767 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compression_opts.strategy: 0 Oct 14 06:03:24 
localhost ceph-mon[307093]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compression_opts.enabled: false Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.target_file_size_base: 67108864 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.target_file_size_multiplier: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_bytes_for_level_base: 268435456 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 14 06:03:24 
localhost ceph-mon[307093]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.arena_block_size: 1048576 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.disable_auto_compactions: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: 
Options.table_properties_collectors: Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.inplace_update_support: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.memtable_huge_page_size: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.bloom_locality: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.max_successive_merges: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.paranoid_file_checks: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.force_consistency_checks: 1 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.report_bg_io_stats: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.ttl: 2592000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.enable_blob_files: false Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.min_blob_size: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.blob_file_size: 268435456 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.blob_compression_type: NoCompression Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.enable_blob_garbage_collection: false Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.blob_file_starting_level: 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005486731/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2c79d823-d159-4ad5-90f1-f3d028b9aa80 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436204140981, "job": 1, "event": "recovery_started", "wal_files": [4]} Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436204145286, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": 
"bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436204145513, "job": 1, "event": "recovery_finished"} Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563d4a796e00 Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: DB pointer 0x563d4a88c000 Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731 does not exist in monmap, will attempt to join an existing cluster Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:03:24 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 
00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.4 0.00 0.00 1 0.004 0 0 0.0 0.0#012 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.4 0.00 0.00 1 0.004 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.4 0.00 0.00 1 0.004 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0.00 0.00 1 0.004 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.10 MB/s 
write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563d4a76f350#2 capacity: 512.00 MB usage: 1.30 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,1.08 KB,0.000205636%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Oct 14 06:03:24 localhost ceph-mon[307093]: using public_addr v2:172.18.0.103:0/0 -> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] Oct 14 06:03:24 localhost ceph-mon[307093]: starting mon.np0005486731 rank -1 at public addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] at bind addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005486731 fsid fcadf6e2-9176-5818-a8d0-37b19acf8eaf Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(???) 
e0 preinit fsid fcadf6e2-9176-5818-a8d0-37b19acf8eaf Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(synchronizing) e8 sync_obtain_latest_monmap Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(synchronizing) e8 sync_obtain_latest_monmap obtained monmap e8 Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(synchronizing).mds e16 new map Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-14T08:11:54.831494+0000#012modified#0112025-10-14T10:00:48.835986+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01178#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26888}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26888 members: 26888#012[mds.mds.np0005486732.xkownj{0:26888} state up:active seq 13 addr [v2:172.18.0.107:6808/1205328170,v1:172.18.0.107:6809/1205328170] compat 
{c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005486733.tvstmf{-1:17244} state up:standby seq 1 addr [v2:172.18.0.108:6808/3626555326,v1:172.18.0.108:6809/3626555326] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005486731.onyaog{-1:17256} state up:standby seq 1 addr [v2:172.18.0.106:6808/799411272,v1:172.18.0.106:6809/799411272] compat {c=[1],r=[1],i=[17ff]}] Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(synchronizing).osd e80 crush map has features 3314933000852226048, adjusting msgr requires Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(synchronizing).osd e80 crush map has features 288514051259236352, adjusting msgr requires Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(synchronizing).osd e80 crush map has features 288514051259236352, adjusting msgr requires Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(synchronizing).osd e80 crush map has features 288514051259236352, adjusting msgr requires Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring crash.np0005486733 (monmap changed)... Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring osd.0 (monmap changed)... Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring osd.3 (monmap changed)... 
Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486733.primvu (monmap changed)... 
Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486729.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: 
from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:03:24 localhost ceph-mon[307093]: Deploying daemon mon.np0005486731 on np0005486731.localdomain Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 
localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486729.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring crash.np0005486729 (monmap changed)... Oct 14 06:03:24 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486729 on np0005486729.localdomain Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:24 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486729.xpybho", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:03:24 localhost ceph-mon[307093]: mon.np0005486731@-1(synchronizing).paxosservice(auth 1..36) refresh upgraded, format 0 -> 3 Oct 14 06:03:24 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x557a99fd4f20 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0 Oct 14 06:03:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:03:25 localhost podman[307132]: 2025-10-14 10:03:25.216137448 +0000 UTC m=+0.082649866 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 06:03:25 localhost podman[307132]: 2025-10-14 10:03:25.23222392 +0000 UTC m=+0.098736348 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:03:25 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:03:26 localhost sshd[307152]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:26 localhost ceph-mon[307093]: mon.np0005486731@-1(probing) e9 my rank is now 4 (was -1) Oct 14 06:03:26 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election Oct 14 06:03:26 localhost ceph-mon[307093]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1 Oct 14 06:03:26 localhost ceph-mon[307093]: mon.np0005486731@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:03:29 localhost ceph-mds[299096]: mds.beacon.mds.np0005486731.onyaog missed beacon ack from the monitors Oct 14 06:03:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:03:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:03:30 localhost systemd[1]: tmp-crun.JA7kEs.mount: Deactivated successfully. 
Oct 14 06:03:30 localhost podman[307156]: 2025-10-14 10:03:30.559614525 +0000 UTC m=+0.089971294 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:03:30 localhost podman[246584]: time="2025-10-14T10:03:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:03:30 localhost podman[307155]: 2025-10-14 10:03:30.62243963 +0000 UTC m=+0.154237437 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 14 06:03:30 localhost podman[307156]: 2025-10-14 10:03:30.648368015 +0000 UTC m=+0.178724794 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:03:30 localhost podman[246584]: @ - - [14/Oct/2025:10:03:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:03:30 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:03:30 localhost podman[307155]: 2025-10-14 10:03:30.707782098 +0000 UTC m=+0.239579905 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:03:30 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:03:30 localhost podman[246584]: @ - - [14/Oct/2025:10:03:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18331 "" "Go-http-client/1.1" Oct 14 06:03:31 localhost podman[307248]: Oct 14 06:03:31 localhost podman[307248]: 2025-10-14 10:03:31.24337965 +0000 UTC m=+0.071792647 container create ce31fe092bb2772d13b50f786dcd7673e605e9988cec854a9e79fa1d272104f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_wescoff, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, release=553, GIT_CLEAN=True, name=rhceph, build-date=2025-09-24T08:57:55, vcs-type=git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Oct 14 06:03:31 localhost systemd[1]: Started libpod-conmon-ce31fe092bb2772d13b50f786dcd7673e605e9988cec854a9e79fa1d272104f5.scope. Oct 14 06:03:31 localhost podman[307248]: 2025-10-14 10:03:31.208872784 +0000 UTC m=+0.037285791 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:31 localhost systemd[1]: Started libcrun container. Oct 14 06:03:31 localhost podman[307248]: 2025-10-14 10:03:31.326838197 +0000 UTC m=+0.155251204 container init ce31fe092bb2772d13b50f786dcd7673e605e9988cec854a9e79fa1d272104f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_wescoff, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_CLEAN=True, name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph) Oct 14 06:03:31 localhost podman[307248]: 2025-10-14 10:03:31.338659414 +0000 UTC m=+0.167072411 container start 
ce31fe092bb2772d13b50f786dcd7673e605e9988cec854a9e79fa1d272104f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_wescoff, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, ceph=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=) Oct 14 06:03:31 localhost podman[307248]: 2025-10-14 10:03:31.339105466 +0000 UTC m=+0.167518523 container attach ce31fe092bb2772d13b50f786dcd7673e605e9988cec854a9e79fa1d272104f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_wescoff, version=7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, 
com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:03:31 localhost heuristic_wescoff[307263]: 167 167 Oct 14 06:03:31 localhost systemd[1]: libpod-ce31fe092bb2772d13b50f786dcd7673e605e9988cec854a9e79fa1d272104f5.scope: Deactivated successfully. Oct 14 06:03:31 localhost podman[307248]: 2025-10-14 10:03:31.342182989 +0000 UTC m=+0.170596006 container died ce31fe092bb2772d13b50f786dcd7673e605e9988cec854a9e79fa1d272104f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_wescoff, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=553, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 14 06:03:31 localhost podman[307268]: 2025-10-14 10:03:31.443334641 +0000 UTC m=+0.083350546 container remove ce31fe092bb2772d13b50f786dcd7673e605e9988cec854a9e79fa1d272104f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_wescoff, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph) Oct 14 06:03:31 localhost systemd[1]: libpod-conmon-ce31fe092bb2772d13b50f786dcd7673e605e9988cec854a9e79fa1d272104f5.scope: Deactivated successfully. Oct 14 06:03:31 localhost systemd[1]: var-lib-containers-storage-overlay-0b5b651dccc5e2b103cd333a469248c5b327fe85cf9d886de365c0907832f7fe-merged.mount: Deactivated successfully. Oct 14 06:03:31 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486730.ddfidc (monmap changed)... 
Oct 14 06:03:31 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486730 calling monitor election Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486729 calling monitor election Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486730 is new leader, mons np0005486730,np0005486729,np0005486733,np0005486732 in quorum (ranks 0,1,2,3) Oct 14 06:03:31 localhost ceph-mon[307093]: Health check failed: 1/5 mons down, quorum np0005486730,np0005486729,np0005486733,np0005486732 (MON_DOWN) Oct 14 06:03:31 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005486730,np0005486729,np0005486733,np0005486732 Oct 14 06:03:31 localhost ceph-mon[307093]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005486730,np0005486729,np0005486733,np0005486732 Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486731 (rank 4) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum) Oct 14 06:03:31 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:31 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:31 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:03:31 localhost ceph-mon[307093]: Reconfiguring crash.np0005486730 (monmap changed)... 
Oct 14 06:03:31 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain Oct 14 06:03:31 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:31 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:31 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:03:31 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election Oct 14 06:03:31 localhost ceph-mon[307093]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1 Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486731@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486731@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486731@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486731@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Oct 14 06:03:31 localhost ceph-mon[307093]: mon.np0005486731@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Oct 14 
06:03:31 localhost ceph-mon[307093]: mon.np0005486731@4(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:03:31 localhost ceph-mon[307093]: mgrc update_daemon_metadata mon.np0005486731 metadata {addrs=[v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005486731.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005486731.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Oct 14 06:03:32 localhost podman[307337]: Oct 14 06:03:32 localhost podman[307337]: 2025-10-14 10:03:32.123757685 +0000 UTC m=+0.069049432 container create ebca521e342f41adba8cb1be5583e581fb7df05c3439b8f8e3a3e15296c18395 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_turing, architecture=x86_64, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, 
com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main) Oct 14 06:03:32 localhost systemd[1]: Started libpod-conmon-ebca521e342f41adba8cb1be5583e581fb7df05c3439b8f8e3a3e15296c18395.scope. Oct 14 06:03:32 localhost systemd[1]: Started libcrun container. Oct 14 06:03:32 localhost podman[307337]: 2025-10-14 10:03:32.191315817 +0000 UTC m=+0.136607564 container init ebca521e342f41adba8cb1be5583e581fb7df05c3439b8f8e3a3e15296c18395 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_turing, GIT_BRANCH=main, vcs-type=git, name=rhceph, release=553, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph) Oct 14 06:03:32 localhost podman[307337]: 2025-10-14 10:03:32.093542505 +0000 UTC m=+0.038834302 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:32 
localhost podman[307337]: 2025-10-14 10:03:32.204636244 +0000 UTC m=+0.149928001 container start ebca521e342f41adba8cb1be5583e581fb7df05c3439b8f8e3a3e15296c18395 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_turing, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, build-date=2025-09-24T08:57:55, release=553, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:03:32 localhost podman[307337]: 2025-10-14 10:03:32.205008304 +0000 UTC m=+0.150300091 container attach ebca521e342f41adba8cb1be5583e581fb7df05c3439b8f8e3a3e15296c18395 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_turing, version=7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on 
RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_BRANCH=main, ceph=True, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, release=553, maintainer=Guillaume Abrioux ) Oct 14 06:03:32 localhost loving_turing[307353]: 167 167 Oct 14 06:03:32 localhost systemd[1]: libpod-ebca521e342f41adba8cb1be5583e581fb7df05c3439b8f8e3a3e15296c18395.scope: Deactivated successfully. Oct 14 06:03:32 localhost podman[307337]: 2025-10-14 10:03:32.207741407 +0000 UTC m=+0.153033154 container died ebca521e342f41adba8cb1be5583e581fb7df05c3439b8f8e3a3e15296c18395 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_turing, io.buildah.version=1.33.12, name=rhceph, GIT_BRANCH=main, RELEASE=main, version=7, architecture=x86_64, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 
14 06:03:32 localhost podman[307358]: 2025-10-14 10:03:32.30930205 +0000 UTC m=+0.087686062 container remove ebca521e342f41adba8cb1be5583e581fb7df05c3439b8f8e3a3e15296c18395 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_turing, build-date=2025-09-24T08:57:55, release=553, architecture=x86_64, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, ceph=True, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph) Oct 14 06:03:32 localhost systemd[1]: libpod-conmon-ebca521e342f41adba8cb1be5583e581fb7df05c3439b8f8e3a3e15296c18395.scope: Deactivated successfully. Oct 14 06:03:32 localhost systemd[1]: tmp-crun.MgP4eg.mount: Deactivated successfully. Oct 14 06:03:32 localhost systemd[1]: var-lib-containers-storage-overlay-fb6a11f0bd5738b48449d0748189dedd7676a2ecd2da173ef49a20dd6ef4e2b0-merged.mount: Deactivated successfully. Oct 14 06:03:32 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election Oct 14 06:03:32 localhost ceph-mon[307093]: Reconfiguring osd.2 (monmap changed)... 
Oct 14 06:03:32 localhost ceph-mon[307093]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:03:32 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election Oct 14 06:03:32 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election Oct 14 06:03:32 localhost ceph-mon[307093]: mon.np0005486730 calling monitor election Oct 14 06:03:32 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election Oct 14 06:03:32 localhost ceph-mon[307093]: mon.np0005486729 calling monitor election Oct 14 06:03:32 localhost ceph-mon[307093]: mon.np0005486730 is new leader, mons np0005486730,np0005486729,np0005486733,np0005486732,np0005486731 in quorum (ranks 0,1,2,3,4) Oct 14 06:03:32 localhost ceph-mon[307093]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005486730,np0005486729,np0005486733,np0005486732) Oct 14 06:03:32 localhost ceph-mon[307093]: Cluster is now healthy Oct 14 06:03:32 localhost ceph-mon[307093]: overall HEALTH_OK Oct 14 06:03:32 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:32 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:32 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:03:33 localhost podman[307436]: Oct 14 06:03:33 localhost podman[307436]: 2025-10-14 10:03:33.137127797 +0000 UTC m=+0.082089032 container create 46133876b4b7408e7d1c8ed49aa53bdc9e9dd26e92005030ce38818eb75a4260 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_liskov, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, description=Red Hat 
Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_CLEAN=True, maintainer=Guillaume Abrioux ) Oct 14 06:03:33 localhost systemd[1]: Started libpod-conmon-46133876b4b7408e7d1c8ed49aa53bdc9e9dd26e92005030ce38818eb75a4260.scope. Oct 14 06:03:33 localhost systemd[1]: Started libcrun container. Oct 14 06:03:33 localhost podman[307436]: 2025-10-14 10:03:33.104207355 +0000 UTC m=+0.049168640 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:33 localhost podman[307436]: 2025-10-14 10:03:33.212977321 +0000 UTC m=+0.157938556 container init 46133876b4b7408e7d1c8ed49aa53bdc9e9dd26e92005030ce38818eb75a4260 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_liskov, name=rhceph, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, 
ceph=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , GIT_BRANCH=main, release=553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, build-date=2025-09-24T08:57:55) Oct 14 06:03:33 localhost podman[307436]: 2025-10-14 10:03:33.230209473 +0000 UTC m=+0.175170718 container start 46133876b4b7408e7d1c8ed49aa53bdc9e9dd26e92005030ce38818eb75a4260 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_liskov, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True) Oct 14 06:03:33 localhost podman[307436]: 2025-10-14 10:03:33.230500041 +0000 UTC m=+0.175461306 container attach 46133876b4b7408e7d1c8ed49aa53bdc9e9dd26e92005030ce38818eb75a4260 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_liskov, vcs-type=git, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=553) Oct 14 06:03:33 localhost flamboyant_liskov[307451]: 167 167 Oct 14 06:03:33 localhost systemd[1]: libpod-46133876b4b7408e7d1c8ed49aa53bdc9e9dd26e92005030ce38818eb75a4260.scope: Deactivated successfully. 
Oct 14 06:03:33 localhost podman[307436]: 2025-10-14 10:03:33.233488961 +0000 UTC m=+0.178450196 container died 46133876b4b7408e7d1c8ed49aa53bdc9e9dd26e92005030ce38818eb75a4260 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_liskov, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git) Oct 14 06:03:33 localhost openstack_network_exporter[248748]: ERROR 10:03:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:03:33 localhost openstack_network_exporter[248748]: ERROR 10:03:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:03:33 localhost openstack_network_exporter[248748]: ERROR 10:03:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:03:33 localhost openstack_network_exporter[248748]: ERROR 10:03:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:03:33 localhost 
openstack_network_exporter[248748]: Oct 14 06:03:33 localhost openstack_network_exporter[248748]: ERROR 10:03:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:03:33 localhost openstack_network_exporter[248748]: Oct 14 06:03:33 localhost podman[307456]: 2025-10-14 10:03:33.365567282 +0000 UTC m=+0.120016069 container remove 46133876b4b7408e7d1c8ed49aa53bdc9e9dd26e92005030ce38818eb75a4260 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_liskov, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph) Oct 14 06:03:33 localhost systemd[1]: libpod-conmon-46133876b4b7408e7d1c8ed49aa53bdc9e9dd26e92005030ce38818eb75a4260.scope: Deactivated successfully. Oct 14 06:03:33 localhost systemd[1]: tmp-crun.pYISlh.mount: Deactivated successfully. Oct 14 06:03:33 localhost systemd[1]: var-lib-containers-storage-overlay-41357a27f09f75560d3363cce1b63ae8c6d1641f4372e1db97a1a58ac132e218-merged.mount: Deactivated successfully. 
Oct 14 06:03:33 localhost ceph-mon[307093]: Reconfiguring osd.4 (monmap changed)... Oct 14 06:03:33 localhost ceph-mon[307093]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:03:34 localhost podman[307531]: Oct 14 06:03:34 localhost podman[307531]: 2025-10-14 10:03:34.188342674 +0000 UTC m=+0.080249333 container create 6509fb2b9d851082b2f4ebbbaa781d0da6d415f8ff23ef5f3970afc25f1fb5d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_carver, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , release=553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True) Oct 14 06:03:34 localhost systemd[1]: Started libpod-conmon-6509fb2b9d851082b2f4ebbbaa781d0da6d415f8ff23ef5f3970afc25f1fb5d4.scope. Oct 14 06:03:34 localhost systemd[1]: Started libcrun container. 
Oct 14 06:03:34 localhost podman[307531]: 2025-10-14 10:03:34.156781478 +0000 UTC m=+0.048688157 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:34 localhost podman[307531]: 2025-10-14 10:03:34.257003264 +0000 UTC m=+0.148909923 container init 6509fb2b9d851082b2f4ebbbaa781d0da6d415f8ff23ef5f3970afc25f1fb5d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_carver, CEPH_POINT_RELEASE=, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vendor=Red Hat, Inc., name=rhceph) Oct 14 06:03:34 localhost podman[307531]: 2025-10-14 10:03:34.266535241 +0000 UTC m=+0.158441900 container start 6509fb2b9d851082b2f4ebbbaa781d0da6d415f8ff23ef5f3970afc25f1fb5d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_carver, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=553, vendor=Red Hat, Inc.) Oct 14 06:03:34 localhost podman[307531]: 2025-10-14 10:03:34.266855299 +0000 UTC m=+0.158762028 container attach 6509fb2b9d851082b2f4ebbbaa781d0da6d415f8ff23ef5f3970afc25f1fb5d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_carver, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, name=rhceph, vcs-type=git, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , version=7, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, 
io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:03:34 localhost heuristic_carver[307546]: 167 167 Oct 14 06:03:34 localhost systemd[1]: libpod-6509fb2b9d851082b2f4ebbbaa781d0da6d415f8ff23ef5f3970afc25f1fb5d4.scope: Deactivated successfully. Oct 14 06:03:34 localhost podman[307531]: 2025-10-14 10:03:34.268978186 +0000 UTC m=+0.160884865 container died 6509fb2b9d851082b2f4ebbbaa781d0da6d415f8ff23ef5f3970afc25f1fb5d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_carver, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., ceph=True, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux ) Oct 14 06:03:34 localhost podman[307551]: 2025-10-14 10:03:34.365181185 +0000 UTC m=+0.081965148 container remove 6509fb2b9d851082b2f4ebbbaa781d0da6d415f8ff23ef5f3970afc25f1fb5d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_carver, architecture=x86_64, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, 
CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, maintainer=Guillaume Abrioux , release=553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public) Oct 14 06:03:34 localhost systemd[1]: libpod-conmon-6509fb2b9d851082b2f4ebbbaa781d0da6d415f8ff23ef5f3970afc25f1fb5d4.scope: Deactivated successfully. Oct 14 06:03:34 localhost systemd[1]: var-lib-containers-storage-overlay-09469dc3e3de487690203f17325999332038f5480da55a65465aaf604514d0d4-merged.mount: Deactivated successfully. Oct 14 06:03:34 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:34 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:34 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... 
Oct 14 06:03:34 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:03:34 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:03:34 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:34 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:34 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:03:35 localhost podman[307622]: Oct 14 06:03:35 localhost podman[307622]: 2025-10-14 10:03:35.06090366 +0000 UTC m=+0.081115176 container create 46f79c9bc50fb3bd6db12ebf0176d4d8d0794e0babb400d828022297fc4fd051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_gauss, io.buildah.version=1.33.12, version=7, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, 
release=553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Oct 14 06:03:35 localhost systemd[1]: Started libpod-conmon-46f79c9bc50fb3bd6db12ebf0176d4d8d0794e0babb400d828022297fc4fd051.scope. Oct 14 06:03:35 localhost systemd[1]: Started libcrun container. Oct 14 06:03:35 localhost podman[307622]: 2025-10-14 10:03:35.123123969 +0000 UTC m=+0.143335485 container init 46f79c9bc50fb3bd6db12ebf0176d4d8d0794e0babb400d828022297fc4fd051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_gauss, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, ceph=True, name=rhceph, GIT_CLEAN=True) Oct 14 06:03:35 localhost podman[307622]: 2025-10-14 10:03:35.028667566 +0000 UTC m=+0.048879082 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:03:35 localhost podman[307622]: 2025-10-14 10:03:35.134557225 +0000 UTC m=+0.154768741 container start 
46f79c9bc50fb3bd6db12ebf0176d4d8d0794e0babb400d828022297fc4fd051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_gauss, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, ceph=True, architecture=x86_64, build-date=2025-09-24T08:57:55, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7) Oct 14 06:03:35 localhost podman[307622]: 2025-10-14 10:03:35.134891514 +0000 UTC m=+0.155103050 container attach 46f79c9bc50fb3bd6db12ebf0176d4d8d0794e0babb400d828022297fc4fd051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_gauss, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, io.buildah.version=1.33.12, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, version=7, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:03:35 localhost jovial_gauss[307637]: 167 167 Oct 14 06:03:35 localhost systemd[1]: libpod-46f79c9bc50fb3bd6db12ebf0176d4d8d0794e0babb400d828022297fc4fd051.scope: Deactivated successfully. Oct 14 06:03:35 localhost podman[307622]: 2025-10-14 10:03:35.141585253 +0000 UTC m=+0.161796799 container died 46f79c9bc50fb3bd6db12ebf0176d4d8d0794e0babb400d828022297fc4fd051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_gauss, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:03:35 localhost podman[307642]: 2025-10-14 
10:03:35.238916723 +0000 UTC m=+0.086466849 container remove 46f79c9bc50fb3bd6db12ebf0176d4d8d0794e0babb400d828022297fc4fd051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_gauss, GIT_CLEAN=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=553) Oct 14 06:03:35 localhost systemd[1]: libpod-conmon-46f79c9bc50fb3bd6db12ebf0176d4d8d0794e0babb400d828022297fc4fd051.scope: Deactivated successfully. Oct 14 06:03:35 localhost systemd[1]: var-lib-containers-storage-overlay-45267c8f600a71ad3792356219dd34422cddc78bf334057a61c7a3632fdfe229-merged.mount: Deactivated successfully. Oct 14 06:03:35 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486731.swasqz (monmap changed)... 
Oct 14 06:03:35 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:03:35 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:35 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:35 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:03:36 localhost ceph-mon[307093]: Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:03:36 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:03:36 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:36 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:36 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 14 06:03:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:03:37 localhost podman[307659]: 2025-10-14 10:03:37.538685318 +0000 UTC m=+0.078084585 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Oct 14 06:03:37 localhost podman[307659]: 2025-10-14 10:03:37.555224612 +0000 UTC m=+0.094623879 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:03:37 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:03:37 localhost ceph-mon[307093]: Reconfiguring osd.1 (monmap changed)... 
Oct 14 06:03:37 localhost ceph-mon[307093]: Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' 
entity='mgr.np0005486730.ddfidc' Oct 14 06:03:37 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 14 06:03:38 localhost ceph-mon[307093]: mon.np0005486731@4(peon).osd e80 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Oct 14 06:03:38 localhost ceph-mon[307093]: mon.np0005486731@4(peon).osd e80 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Oct 14 06:03:38 localhost ceph-mon[307093]: mon.np0005486731@4(peon).osd e81 e81: 6 total, 6 up, 6 in Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr handle_mgr_map Activating! Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr handle_mgr_map I am now activating Oct 14 06:03:38 localhost ceph-mon[307093]: Reconfig service osd.default_drive_group Oct 14 06:03:38 localhost ceph-mon[307093]: Reconfiguring osd.5 (monmap changed)... Oct 14 06:03:38 localhost ceph-mon[307093]: Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:03:38 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:38 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:38 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:38 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' Oct 14 06:03:38 localhost ceph-mon[307093]: from='mgr.14184 172.18.0.105:0/819734915' entity='mgr.np0005486730.ddfidc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:03:38 localhost ceph-mon[307093]: from='client.? 
' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 14 06:03:38 localhost ceph-mon[307093]: Activating manager daemon np0005486731.swasqz Oct 14 06:03:38 localhost ceph-mon[307093]: from='client.? 172.18.0.200:0/3558168517' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 14 06:03:38 localhost ceph-mon[307093]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 14 06:03:38 localhost systemd[1]: session-67.scope: Deactivated successfully. Oct 14 06:03:38 localhost systemd[1]: session-67.scope: Consumed 26.149s CPU time. Oct 14 06:03:38 localhost systemd-logind[760]: Session 67 logged out. Waiting for processes to exit. Oct 14 06:03:38 localhost systemd-logind[760]: Removed session 67. Oct 14 06:03:38 localhost ceph-mgr[300442]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: balancer Oct 14 06:03:38 localhost ceph-mgr[300442]: [balancer INFO root] Starting Oct 14 06:03:38 localhost ceph-mgr[300442]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:03:38 Oct 14 06:03:38 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:03:38 localhost ceph-mgr[300442]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Oct 14 06:03:38 localhost ceph-mgr[300442]: [cephadm WARNING root] removing stray HostCache host record np0005486728.localdomain.devices.0 Oct 14 06:03:38 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : removing stray HostCache host record np0005486728.localdomain.devices.0 Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: cephadm Oct 14 06:03:38 localhost ceph-mgr[300442]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost 
ceph-mgr[300442]: mgr load Constructed class from module: crash Oct 14 06:03:38 localhost ceph-mgr[300442]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: devicehealth Oct 14 06:03:38 localhost ceph-mgr[300442]: [devicehealth INFO root] Starting Oct 14 06:03:38 localhost ceph-mgr[300442]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: iostat Oct 14 06:03:38 localhost ceph-mgr[300442]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: nfs Oct 14 06:03:38 localhost ceph-mgr[300442]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: orchestrator Oct 14 06:03:38 localhost ceph-mgr[300442]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: pg_autoscaler Oct 14 06:03:38 localhost ceph-mgr[300442]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: progress Oct 14 06:03:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: [progress INFO root] Loading... Oct 14 06:03:38 localhost ceph-mgr[300442]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Oct 14 06:03:38 localhost ceph-mgr[300442]: [progress INFO root] Loaded OSDMap, ready. 
Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] recovery thread starting Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] starting setup Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: rbd_support Oct 14 06:03:38 localhost ceph-mgr[300442]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: restful Oct 14 06:03:38 localhost ceph-mgr[300442]: [restful INFO root] server_addr: :: server_port: 8003 Oct 14 06:03:38 localhost ceph-mgr[300442]: [restful WARNING root] server not running: no certificate configured Oct 14 06:03:38 localhost ceph-mgr[300442]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: status Oct 14 06:03:38 localhost ceph-mgr[300442]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: telemetry Oct 14 06:03:38 localhost ceph-mgr[300442]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:03:38 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:03:38 localhost ceph-mgr[300442]: mgr load Constructed class from module: volumes Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:03:38 localhost 
ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] PerfHandler: starting Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_task_task: vms, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.938+0000 7fb22e4a0640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.938+0000 7fb22e4a0640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.938+0000 7fb22e4a0640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.939+0000 7fb22e4a0640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.939+0000 7fb22e4a0640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 
06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.943+0000 7fb22bc9b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.943+0000 7fb22bc9b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.943+0000 7fb22bc9b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.943+0000 7fb22bc9b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:03:38.943+0000 7fb22bc9b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_task_task: volumes, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_task_task: images, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_task_task: backups, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] 
TaskHandler: starting Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Oct 14 06:03:38 localhost ceph-mgr[300442]: [rbd_support INFO root] setup complete Oct 14 06:03:39 localhost sshd[307818]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:03:39 localhost ceph-mon[307093]: mon.np0005486731@4(peon).osd e81 _set_new_cache_sizes cache_size:1019475657 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:03:39 localhost systemd-logind[760]: New session 70 of user ceph-admin. Oct 14 06:03:39 localhost systemd[1]: Started Session 70 of User ceph-admin. 
Oct 14 06:03:39 localhost ceph-mon[307093]: Manager daemon np0005486731.swasqz is now available Oct 14 06:03:39 localhost ceph-mon[307093]: removing stray HostCache host record np0005486728.localdomain.devices.0 Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486728.localdomain.devices.0"} : dispatch Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486728.localdomain.devices.0"} : dispatch Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486728.localdomain.devices.0"}]': finished Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486728.localdomain.devices.0"} : dispatch Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486728.localdomain.devices.0"} : dispatch Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486728.localdomain.devices.0"}]': finished Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/mirror_snapshot_schedule"} : dispatch Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/mirror_snapshot_schedule"} : dispatch Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 ' 
entity='mgr.np0005486731.swasqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/trash_purge_schedule"} : dispatch Oct 14 06:03:39 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/trash_purge_schedule"} : dispatch Oct 14 06:03:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:03:40 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:03:40] ENGINE Bus STARTING Oct 14 06:03:40 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:03:40] ENGINE Bus STARTING Oct 14 06:03:40 localhost podman[307935]: 2025-10-14 10:03:40.235853708 +0000 UTC m=+0.082036271 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , RELEASE=main, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:03:40 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:03:40] ENGINE Serving on https://172.18.0.106:7150 Oct 14 06:03:40 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:03:40] ENGINE Serving on https://172.18.0.106:7150 Oct 14 06:03:40 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:03:40] ENGINE Client ('172.18.0.106', 58016) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 14 06:03:40 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:03:40] ENGINE Client ('172.18.0.106', 58016) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 14 06:03:40 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:03:40] ENGINE Serving on http://172.18.0.106:8765 Oct 14 06:03:40 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:03:40] ENGINE Serving on http://172.18.0.106:8765 Oct 14 06:03:40 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:03:40] ENGINE Bus STARTED Oct 14 06:03:40 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:03:40] ENGINE Bus STARTED Oct 14 06:03:40 localhost podman[307935]: 2025-10-14 10:03:40.367134148 +0000 UTC m=+0.213316741 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, io.openshift.expose-services=, release=553, ceph=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main) Oct 14 06:03:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:03:40 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:03:40 localhost podman[308024]: 2025-10-14 10:03:40.770323219 +0000 UTC m=+0.106284591 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:03:40 localhost podman[308024]: 2025-10-14 10:03:40.777002598 +0000 UTC m=+0.112964040 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 06:03:40 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:03:41 localhost ceph-mgr[300442]: [devicehealth INFO root] Check health Oct 14 06:03:41 localhost ceph-mon[307093]: [14/Oct/2025:10:03:40] ENGINE Bus STARTING Oct 14 06:03:41 localhost ceph-mon[307093]: [14/Oct/2025:10:03:40] ENGINE Serving on https://172.18.0.106:7150 Oct 14 06:03:41 localhost ceph-mon[307093]: [14/Oct/2025:10:03:40] ENGINE Client ('172.18.0.106', 58016) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 14 06:03:41 localhost ceph-mon[307093]: [14/Oct/2025:10:03:40] ENGINE Serving on http://172.18.0.106:8765 Oct 14 06:03:41 localhost ceph-mon[307093]: [14/Oct/2025:10:03:40] ENGINE Bus STARTED Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' 
entity='mgr.np0005486731.swasqz' Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:41 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm INFO root] Adjusting osd_memory_target on np0005486733.localdomain to 836.6M Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005486733.localdomain to 836.6M Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm INFO root] Adjusting osd_memory_target on np0005486731.localdomain to 836.6M Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005486731.localdomain to 836.6M Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 
06:03:42 localhost ceph-mgr[300442]: [cephadm INFO root] Adjusting osd_memory_target on np0005486732.localdomain to 836.6M Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005486732.localdomain to 836.6M Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486729.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486729.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486730.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:42 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 
06:03:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:03:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:03:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:03:43 localhost systemd[1]: tmp-crun.Y7DEWJ.mount: Deactivated successfully. Oct 14 06:03:43 localhost podman[308303]: 2025-10-14 10:03:43.275764089 +0000 UTC m=+0.109613860 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red 
Hat Universal Base Image 9 Minimal, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 06:03:43 localhost podman[308303]: 2025-10-14 10:03:43.283564588 +0000 UTC m=+0.117414379 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, config_id=edpm) Oct 14 06:03:43 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd/host:np0005486729", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd/host:np0005486729", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd/host:np0005486730", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd/host:np0005486730", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: 
from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486733.localdomain to 836.6M Oct 14 06:03:43 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486731.localdomain to 836.6M Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:03:43 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:03:43 localhost 
ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486732.localdomain to 836.6M Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:03:43 localhost ceph-mon[307093]: Updating np0005486729.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:03:43 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:43 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:43 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:43 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:03:43 localhost podman[308342]: 2025-10-14 10:03:43.380792644 +0000 UTC m=+0.134929268 
container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:03:43 localhost podman[308302]: 2025-10-14 10:03:43.353840942 +0000 UTC m=+0.191809244 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:03:43 localhost podman[308342]: 2025-10-14 10:03:43.420190061 +0000 UTC m=+0.174326665 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:03:43 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:03:43 localhost podman[308302]: 2025-10-14 10:03:43.441957885 +0000 UTC m=+0.279926197 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes 
Operator team) Oct 14 06:03:43 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:03:43 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:43 localhost ceph-mgr[300442]: mgr.server handle_open ignoring 
open from mgr.np0005486730.ddfidc 172.18.0.105:0/3453239687; not ready for session (expect reconnect) Oct 14 06:03:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486730.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486729.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486729.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mon[307093]: mon.np0005486731@4(peon).osd e81 _set_new_cache_sizes cache_size:1020040053 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:03:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mgr.np0005486730.ddfidc 172.18.0.105:0/3453239687; not ready for session (expect reconnect) Oct 14 06:03:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating 
np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:44 localhost ceph-mon[307093]: Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:44 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:44 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:44 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:03:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:44 localhost 
ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:45 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:45 localhost ceph-mon[307093]: Updating np0005486729.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:45 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:45 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:45 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:03:45 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:45 localhost ceph-mon[307093]: Updating np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:45 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:03:45 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:45 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 
06:03:45 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:45 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:45 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:45 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:45 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:45 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 0 B/s wr, 19 op/s Oct 14 06:03:45 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 2f9620ac-1618-4fec-bd52-df99af77e42d (Updating node-proxy deployment (+5 -> 5)) Oct 14 06:03:45 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 2f9620ac-1618-4fec-bd52-df99af77e42d (Updating node-proxy deployment (+5 -> 5)) Oct 14 06:03:45 localhost ceph-mgr[300442]: [progress INFO root] Completed event 2f9620ac-1618-4fec-bd52-df99af77e42d (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Oct 14 06:03:46 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486729 (monmap changed)... Oct 14 06:03:46 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486729 (monmap changed)... 
Oct 14 06:03:46 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486729 on np0005486729.localdomain
Oct 14 06:03:46 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486729 on np0005486729.localdomain
Oct 14 06:03:46 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:03:46 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:03:46 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:46 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:46 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:46 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486729.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:03:46 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486729.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:03:47 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486729.xpybho (monmap changed)...
Oct 14 06:03:47 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486729.xpybho (monmap changed)...
Oct 14 06:03:47 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486729.xpybho on np0005486729.localdomain
Oct 14 06:03:47 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486729.xpybho on np0005486729.localdomain
Oct 14 06:03:47 localhost ceph-mon[307093]: Reconfiguring crash.np0005486729 (monmap changed)...
Oct 14 06:03:47 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486729 on np0005486729.localdomain
Oct 14 06:03:47 localhost ceph-mon[307093]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Oct 14 06:03:47 localhost ceph-mon[307093]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Oct 14 06:03:47 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:47 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:47 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486729.xpybho", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:03:47 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486729.xpybho", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:03:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 0 B/s wr, 14 op/s
Oct 14 06:03:48 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486730.ddfidc (monmap changed)...
Oct 14 06:03:48 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486730.ddfidc (monmap changed)...
Oct 14 06:03:48 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain
Oct 14 06:03:48 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain
Oct 14 06:03:48 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486729.xpybho (monmap changed)...
Oct 14 06:03:48 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486729.xpybho on np0005486729.localdomain
Oct 14 06:03:48 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:48 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:48 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:03:48 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:03:48 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events
Oct 14 06:03:49 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005486731.localdomain
Oct 14 06:03:49 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005486731.localdomain
Oct 14 06:03:49 localhost ceph-mon[307093]: mon.np0005486731@4(peon).osd e81 _set_new_cache_sizes cache_size:1020054364 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:03:49 localhost podman[309007]:
Oct 14 06:03:49 localhost podman[309007]: 2025-10-14 10:03:49.626171686 +0000 UTC m=+0.078993900 container create f79e4f05ca35889fba48d003a57cbe603b9f3a1d722ae496f5081571567c476e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bohr, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux )
Oct 14 06:03:49 localhost systemd[1]: Started libpod-conmon-f79e4f05ca35889fba48d003a57cbe603b9f3a1d722ae496f5081571567c476e.scope.
Oct 14 06:03:49 localhost systemd[1]: Started libcrun container.
Oct 14 06:03:49 localhost podman[309007]: 2025-10-14 10:03:49.592931105 +0000 UTC m=+0.045753369 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:03:49 localhost podman[309007]: 2025-10-14 10:03:49.695387861 +0000 UTC m=+0.148210075 container init f79e4f05ca35889fba48d003a57cbe603b9f3a1d722ae496f5081571567c476e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bohr, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, ceph=True)
Oct 14 06:03:49 localhost systemd[1]: tmp-crun.Irxik1.mount: Deactivated successfully.
Oct 14 06:03:49 localhost podman[309007]: 2025-10-14 10:03:49.707756094 +0000 UTC m=+0.160578308 container start f79e4f05ca35889fba48d003a57cbe603b9f3a1d722ae496f5081571567c476e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bohr, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, vendor=Red Hat, Inc.) 
Oct 14 06:03:49 localhost podman[309007]: 2025-10-14 10:03:49.708045001 +0000 UTC m=+0.160867215 container attach f79e4f05ca35889fba48d003a57cbe603b9f3a1d722ae496f5081571567c476e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bohr, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Oct 14 06:03:49 localhost inspiring_bohr[309021]: 167 167
Oct 14 06:03:49 localhost systemd[1]: libpod-f79e4f05ca35889fba48d003a57cbe603b9f3a1d722ae496f5081571567c476e.scope: Deactivated successfully.
Oct 14 06:03:49 localhost podman[309007]: 2025-10-14 10:03:49.711606406 +0000 UTC m=+0.164428681 container died f79e4f05ca35889fba48d003a57cbe603b9f3a1d722ae496f5081571567c476e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bohr, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_BRANCH=main, architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_CLEAN=True, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7)
Oct 14 06:03:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s
Oct 14 06:03:49 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486730.ddfidc (monmap changed)...
Oct 14 06:03:49 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain
Oct 14 06:03:49 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:49 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:49 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:49 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 14 06:03:49 localhost podman[309026]: 2025-10-14 10:03:49.81690749 +0000 UTC m=+0.093262662 container remove f79e4f05ca35889fba48d003a57cbe603b9f3a1d722ae496f5081571567c476e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bohr, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., ceph=True, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=553, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, version=7)
Oct 14 06:03:49 localhost systemd[1]: libpod-conmon-f79e4f05ca35889fba48d003a57cbe603b9f3a1d722ae496f5081571567c476e.scope: Deactivated successfully.
Oct 14 06:03:50 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005486731.localdomain
Oct 14 06:03:50 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005486731.localdomain
Oct 14 06:03:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.26715 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.591539) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436230591632, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11618, "num_deletes": 261, "total_data_size": 20682077, "memory_usage": 21824776, "flush_reason": "Manual Compaction"}
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct 14 06:03:50 localhost systemd[1]: var-lib-containers-storage-overlay-266926ca3658d13c2abdeba3a3fbe5299fdd10e7b6a6d8a822f144afe6d8a4af-merged.mount: Deactivated successfully.
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436230654435, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 17094473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11623, "table_properties": {"data_size": 17030584, "index_size": 36199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26629, "raw_key_size": 281179, "raw_average_key_size": 26, "raw_value_size": 16845792, "raw_average_value_size": 1583, "num_data_blocks": 1385, "num_entries": 10637, "num_filter_entries": 10637, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 1760436204, "file_creation_time": 1760436230, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 62965 microseconds, and 31311 cpu microseconds.
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.654504) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 17094473 bytes OK
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.654530) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.656559) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.656594) EVENT_LOG_v1 {"time_micros": 1760436230656574, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.656615) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 20603622, prev total WAL file size 20604209, number of live WAL files 2.
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.660517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303130' seq:72057594037927935, type:22 .. '6B760031323633' seq:0, type:0; will stop at (end)
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(16MB) 8(2012B)]
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436230660753, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 17096485, "oldest_snapshot_seqno": -1}
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 10385 keys, 17091134 bytes, temperature: kUnknown
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436230744389, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 17091134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17027947, "index_size": 36142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 277325, "raw_average_key_size": 26, "raw_value_size": 16846447, "raw_average_value_size": 1622, "num_data_blocks": 1384, "num_entries": 10385, "num_filter_entries": 10385, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436230, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.744938) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 17091134 bytes
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.746770) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.8 rd, 203.7 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(16.3, 0.0 +0.0 blob) out(16.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10642, records dropped: 257 output_compression: NoCompression
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.746802) EVENT_LOG_v1 {"time_micros": 1760436230746787, "job": 4, "event": "compaction_finished", "compaction_time_micros": 83894, "compaction_time_cpu_micros": 48677, "output_level": 6, "num_output_files": 1, "total_output_size": 17091134, "num_input_records": 10642, "num_output_records": 10385, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436230749399, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436230749512, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 14 06:03:50 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:03:50.660094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:03:50 localhost ceph-mon[307093]: Reconfiguring daemon osd.2 on np0005486731.localdomain
Oct 14 06:03:50 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:50 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:50 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:50 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz'
Oct 14 06:03:50 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 14 06:03:50 localhost podman[309104]:
Oct 14 06:03:50 localhost podman[309104]: 2025-10-14 10:03:50.849865927 +0000 UTC m=+0.079359449 container create 355b89ed1af94d88e87ab41d1c5b7bd285182ec100530cfa0bb62a25092a3ae3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_noether, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, version=7, GIT_BRANCH=main)
Oct 14 06:03:50 localhost systemd[1]: Started libpod-conmon-355b89ed1af94d88e87ab41d1c5b7bd285182ec100530cfa0bb62a25092a3ae3.scope.
Oct 14 06:03:50 localhost systemd[1]: Started libcrun container.
Oct 14 06:03:50 localhost podman[309104]: 2025-10-14 10:03:50.818791193 +0000 UTC m=+0.048284755 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:03:50 localhost podman[309104]: 2025-10-14 10:03:50.925095514 +0000 UTC m=+0.154589026 container init 355b89ed1af94d88e87ab41d1c5b7bd285182ec100530cfa0bb62a25092a3ae3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_noether, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, release=553, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 14 06:03:50 localhost podman[309104]: 2025-10-14 10:03:50.937436655 +0000 UTC m=+0.166930177 container start 355b89ed1af94d88e87ab41d1c5b7bd285182ec100530cfa0bb62a25092a3ae3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_noether, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhceph ceph, release=553, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7)
Oct 14 06:03:50 localhost podman[309104]: 2025-10-14 10:03:50.937765314 +0000 UTC m=+0.167258856 container attach 355b89ed1af94d88e87ab41d1c5b7bd285182ec100530cfa0bb62a25092a3ae3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_noether, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.buildah.version=1.33.12, release=553, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph)
Oct 14 06:03:50 localhost dazzling_noether[309119]: 167 167
Oct 14 06:03:50 localhost systemd[1]: libpod-355b89ed1af94d88e87ab41d1c5b7bd285182ec100530cfa0bb62a25092a3ae3.scope: Deactivated successfully.
Oct 14 06:03:50 localhost podman[309104]: 2025-10-14 10:03:50.945980304 +0000 UTC m=+0.175473896 container died 355b89ed1af94d88e87ab41d1c5b7bd285182ec100530cfa0bb62a25092a3ae3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_noether, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., release=553)
Oct 14 06:03:51 localhost podman[309124]: 2025-10-14 10:03:51.04464559 +0000 UTC m=+0.087201269 container remove 355b89ed1af94d88e87ab41d1c5b7bd285182ec100530cfa0bb62a25092a3ae3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_noether, release=553, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vcs-type=git, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main)
Oct 14 06:03:51 localhost systemd[1]: libpod-conmon-355b89ed1af94d88e87ab41d1c5b7bd285182ec100530cfa0bb62a25092a3ae3.scope: Deactivated successfully.
Oct 14 06:03:51 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)...
Oct 14 06:03:51 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)...
Oct 14 06:03:51 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain
Oct 14 06:03:51 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain
Oct 14 06:03:51 localhost systemd[1]: tmp-crun.lb3e73.mount: Deactivated successfully.
Oct 14 06:03:51 localhost systemd[1]: var-lib-containers-storage-overlay-aac3c3eefabea33c0adaf7defb7d2a2f3a03fee67d7c6be01a0c08454da191dc-merged.mount: Deactivated successfully.
Oct 14 06:03:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Oct 14 06:03:51 localhost ceph-mon[307093]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:03:51 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:51 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:51 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:51 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:51 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:03:51 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:03:52 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... Oct 14 06:03:52 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... 
Oct 14 06:03:52 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:03:52 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:03:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34272 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Oct 14 06:03:52 localhost ceph-mgr[300442]: [cephadm INFO root] Saving service mon spec with placement label:mon Oct 14 06:03:52 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Oct 14 06:03:52 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... Oct 14 06:03:52 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:03:52 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:52 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:52 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:03:52 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:03:52 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:53 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005486732 (monmap changed)... 
Oct 14 06:03:53 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005486732 (monmap changed)... Oct 14 06:03:53 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:03:53 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:03:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Oct 14 06:03:53 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... Oct 14 06:03:53 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:03:53 localhost ceph-mon[307093]: Saving service mon spec with placement label:mon Oct 14 06:03:53 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:53 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:53 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:03:53 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34282 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005486731", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 14 06:03:54 localhost ceph-mon[307093]: mon.np0005486731@4(peon).osd e81 _set_new_cache_sizes cache_size:1020054726 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:03:54 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486733 (monmap changed)... Oct 14 06:03:54 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486733 (monmap changed)... 
Oct 14 06:03:54 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:03:54 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:03:54 localhost ceph-mon[307093]: Reconfiguring mon.np0005486732 (monmap changed)... Oct 14 06:03:54 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:03:54 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:54 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:54 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:03:54 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:03:55 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Oct 14 06:03:55 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Oct 14 06:03:55 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:03:55 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:03:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:03:55 localhost podman[309149]: 2025-10-14 10:03:55.548880424 +0000 UTC m=+0.087222310 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:03:55 localhost podman[309149]: 2025-10-14 10:03:55.559908769 +0000 UTC m=+0.098250725 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm) Oct 14 06:03:55 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:03:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Oct 14 06:03:55 localhost ceph-mon[307093]: Reconfiguring crash.np0005486733 (monmap changed)... Oct 14 06:03:55 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:03:55 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:55 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:55 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 14 06:03:56 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Oct 14 06:03:56 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Oct 14 06:03:56 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:03:56 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:03:56 localhost ceph-mon[307093]: Reconfiguring osd.0 (monmap changed)... 
Oct 14 06:03:56 localhost ceph-mon[307093]: Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:03:56 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:56 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:56 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:56 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:56 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:03:57 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... Oct 14 06:03:57 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... Oct 14 06:03:57 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:03:57 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:03:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:03:57.628 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:03:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:03:57.629 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:03:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:03:57.630 161932 DEBUG 
oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:03:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:03:57 localhost ceph-mon[307093]: Reconfiguring osd.3 (monmap changed)... Oct 14 06:03:57 localhost ceph-mon[307093]: Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:03:57 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:57 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:57 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:57 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:57 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:03:57 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:03:58 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486733.primvu (monmap changed)... Oct 14 06:03:58 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486733.primvu (monmap changed)... 
Oct 14 06:03:58 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:03:58 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:03:58 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... Oct 14 06:03:58 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:03:58 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:58 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:58 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:03:58 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:03:59 localhost ceph-mon[307093]: mon.np0005486731@4(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:03:59 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005486733 (monmap changed)... Oct 14 06:03:59 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005486733 (monmap changed)... 
Oct 14 06:03:59 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:03:59 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:03:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:03:59 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486733.primvu (monmap changed)... Oct 14 06:03:59 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:03:59 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:59 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:03:59 localhost ceph-mon[307093]: Reconfiguring mon.np0005486733 (monmap changed)... Oct 14 06:03:59 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:03:59 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:04:00 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 886c36ad-a298-4bc8-a1f4-aec9907a9db9 (Updating node-proxy deployment (+5 -> 5)) Oct 14 06:04:00 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 886c36ad-a298-4bc8-a1f4-aec9907a9db9 (Updating node-proxy deployment (+5 -> 5)) Oct 14 06:04:00 localhost ceph-mgr[300442]: [progress INFO root] Completed event 886c36ad-a298-4bc8-a1f4-aec9907a9db9 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Oct 14 06:04:00 localhost podman[246584]: time="2025-10-14T10:04:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:04:00 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005486729 
(monmap changed)... Oct 14 06:04:00 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005486729 (monmap changed)... Oct 14 06:04:00 localhost podman[246584]: @ - - [14/Oct/2025:10:04:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:04:00 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005486729 on np0005486729.localdomain Oct 14 06:04:00 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005486729 on np0005486729.localdomain Oct 14 06:04:00 localhost podman[246584]: @ - - [14/Oct/2025:10:04:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18336 "" "Go-http-client/1.1" Oct 14 06:04:01 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:04:01 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:04:01 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:04:01 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:04:01 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:04:01 localhost ceph-mon[307093]: Reconfiguring mon.np0005486729 (monmap changed)... Oct 14 06:04:01 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:04:01 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486729 on np0005486729.localdomain Oct 14 06:04:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 06:04:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:04:01 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005486730 (monmap changed)... Oct 14 06:04:01 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005486730 (monmap changed)... Oct 14 06:04:01 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005486730 on np0005486730.localdomain Oct 14 06:04:01 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005486730 on np0005486730.localdomain Oct 14 06:04:01 localhost systemd[1]: tmp-crun.2GiHd0.mount: Deactivated successfully. Oct 14 06:04:01 localhost podman[309188]: 2025-10-14 10:04:01.559156879 +0000 UTC m=+0.097007631 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:04:01 localhost systemd[1]: tmp-crun.ioKn0I.mount: Deactivated successfully. 
Oct 14 06:04:01 localhost podman[309187]: 2025-10-14 10:04:01.602549173 +0000 UTC m=+0.143468027 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:04:01 localhost podman[309187]: 2025-10-14 10:04:01.632674801 +0000 UTC 
m=+0.173593685 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Oct 14 06:04:01 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:04:01 localhost podman[309188]: 2025-10-14 10:04:01.652142102 +0000 UTC m=+0.189992824 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:04:01 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:04:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:01 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34313 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005486729", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 14 06:04:02 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005486731 (monmap changed)... Oct 14 06:04:02 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005486731 (monmap changed)... 
Oct 14 06:04:02 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005486731 on np0005486731.localdomain Oct 14 06:04:02 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005486731 on np0005486731.localdomain Oct 14 06:04:02 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:04:02 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:04:02 localhost ceph-mon[307093]: Reconfiguring mon.np0005486730 (monmap changed)... Oct 14 06:04:02 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:04:02 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486730 on np0005486730.localdomain Oct 14 06:04:02 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:04:02 localhost ceph-mon[307093]: from='mgr.17397 ' entity='mgr.np0005486731.swasqz' Oct 14 06:04:02 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:04:03 localhost podman[309277]: Oct 14 06:04:03 localhost podman[309277]: 2025-10-14 10:04:03.110595329 +0000 UTC m=+0.076576414 container create c4211ab23a537db0554440c18142b30a9d8935643dca36aedbc8ab697b026513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_haibt, maintainer=Guillaume Abrioux , ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
distribution-scope=public, release=553, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=) Oct 14 06:04:03 localhost systemd[1]: Started libpod-conmon-c4211ab23a537db0554440c18142b30a9d8935643dca36aedbc8ab697b026513.scope. Oct 14 06:04:03 localhost systemd[1]: Started libcrun container. Oct 14 06:04:03 localhost podman[309277]: 2025-10-14 10:04:03.07931637 +0000 UTC m=+0.045297475 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:03 localhost podman[309277]: 2025-10-14 10:04:03.18335579 +0000 UTC m=+0.149336885 container init c4211ab23a537db0554440c18142b30a9d8935643dca36aedbc8ab697b026513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_haibt, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-09-24T08:57:55, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
io.openshift.expose-services=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, CEPH_POINT_RELEASE=) Oct 14 06:04:03 localhost podman[309277]: 2025-10-14 10:04:03.195125325 +0000 UTC m=+0.161106410 container start c4211ab23a537db0554440c18142b30a9d8935643dca36aedbc8ab697b026513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_haibt, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, ceph=True, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7) Oct 14 06:04:03 localhost podman[309277]: 2025-10-14 10:04:03.195411583 +0000 UTC m=+0.161392678 container attach c4211ab23a537db0554440c18142b30a9d8935643dca36aedbc8ab697b026513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_haibt, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 
on RHEL 9, io.buildah.version=1.33.12, distribution-scope=public, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.expose-services=, ceph=True, com.redhat.component=rhceph-container, RELEASE=main) Oct 14 06:04:03 localhost serene_haibt[309293]: 167 167 Oct 14 06:04:03 localhost systemd[1]: libpod-c4211ab23a537db0554440c18142b30a9d8935643dca36aedbc8ab697b026513.scope: Deactivated successfully. Oct 14 06:04:03 localhost podman[309277]: 2025-10-14 10:04:03.199038521 +0000 UTC m=+0.165019656 container died c4211ab23a537db0554440c18142b30a9d8935643dca36aedbc8ab697b026513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_haibt, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_BRANCH=main, name=rhceph, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_CLEAN=True, architecture=x86_64, 
CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7) Oct 14 06:04:03 localhost podman[309298]: 2025-10-14 10:04:03.308933007 +0000 UTC m=+0.090612610 container remove c4211ab23a537db0554440c18142b30a9d8935643dca36aedbc8ab697b026513 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_haibt, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, distribution-scope=public, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=553, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:04:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34292 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005486729"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Oct 14 06:04:03 localhost ceph-mgr[300442]: [cephadm INFO root] Remove daemons mon.np0005486729 Oct 14 06:04:03 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005486729 Oct 14 06:04:03 localhost systemd[1]: 
libpod-conmon-c4211ab23a537db0554440c18142b30a9d8935643dca36aedbc8ab697b026513.scope: Deactivated successfully. Oct 14 06:04:03 localhost ceph-mgr[300442]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005486729: new quorum should be ['np0005486730', 'np0005486733', 'np0005486732', 'np0005486731'] (from ['np0005486730', 'np0005486733', 'np0005486732', 'np0005486731']) Oct 14 06:04:03 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005486729: new quorum should be ['np0005486730', 'np0005486733', 'np0005486732', 'np0005486731'] (from ['np0005486730', 'np0005486733', 'np0005486732', 'np0005486731']) Oct 14 06:04:03 localhost ceph-mgr[300442]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005486729 from monmap... Oct 14 06:04:03 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing monitor np0005486729 from monmap... Oct 14 06:04:03 localhost openstack_network_exporter[248748]: ERROR 10:04:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:04:03 localhost openstack_network_exporter[248748]: ERROR 10:04:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:04:03 localhost openstack_network_exporter[248748]: ERROR 10:04:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:04:03 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005486729 from np0005486729.localdomain -- ports [] Oct 14 06:04:03 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005486729 from np0005486729.localdomain -- ports [] Oct 14 06:04:03 localhost openstack_network_exporter[248748]: ERROR 10:04:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:04:03 localhost openstack_network_exporter[248748]: Oct 14 06:04:03 localhost 
openstack_network_exporter[248748]: ERROR 10:04:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:04:03 localhost openstack_network_exporter[248748]: Oct 14 06:04:03 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Oct 14 06:04:03 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Oct 14 06:04:03 localhost ceph-mgr[300442]: client.34273 ms_handle_reset on v2:172.18.0.107:3300/0 Oct 14 06:04:03 localhost ceph-mgr[300442]: client.34268 ms_handle_reset on v2:172.18.0.107:3300/0 Oct 14 06:04:03 localhost ceph-mon[307093]: mon.np0005486731@4(peon) e10 my rank is now 3 (was 4) Oct 14 06:04:03 localhost ceph-mgr[300442]: client.34273 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 14 06:04:03 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election Oct 14 06:04:03 localhost ceph-mon[307093]: paxos.3).electionLogic(40) init, last seen epoch 40 Oct 14 06:04:03 localhost ceph-mon[307093]: mon.np0005486731@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:04:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:03 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:04:04 localhost systemd[1]: var-lib-containers-storage-overlay-f8ab262a48cf3d7aa43100f8bfb2f5381064205069d79d32e788e5980edc0448-merged.mount: Deactivated successfully. Oct 14 06:04:04 localhost systemd[1]: session-68.scope: Deactivated successfully. Oct 14 06:04:04 localhost systemd[1]: session-68.scope: Consumed 1.750s CPU time. Oct 14 06:04:04 localhost systemd-logind[760]: Session 68 logged out. Waiting for processes to exit. Oct 14 06:04:04 localhost systemd-logind[760]: Removed session 68. 
Oct 14 06:04:05 localhost ceph-mon[307093]: mon.np0005486731@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:04:05 localhost ceph-mon[307093]: mon.np0005486731@3(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.418047) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436245418091, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 821, "num_deletes": 252, "total_data_size": 1164700, "memory_usage": 1181200, "flush_reason": "Manual Compaction"} Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436245426608, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 674114, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11629, "largest_seqno": 12444, "table_properties": {"data_size": 669866, "index_size": 1911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 11151, "raw_average_key_size": 21, "raw_value_size": 660837, "raw_average_value_size": 1303, "num_data_blocks": 77, 
"num_entries": 507, "num_filter_entries": 507, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436230, "oldest_key_time": 1760436230, "file_creation_time": 1760436245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 8606 microseconds, and 3015 cpu microseconds. Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.426653) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 674114 bytes OK Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.426675) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.428486) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.428506) EVENT_LOG_v1 {"time_micros": 1760436245428500, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.428526) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1160153, prev total WAL file size 1182498, number of live WAL files 2. Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.429143) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. 
'7061786F73003131303435' seq:0, type:0; will stop at (end) Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(658KB)], [15(16MB)] Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436245429195, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 17765248, "oldest_snapshot_seqno": -1} Oct 14 06:04:05 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:04:05 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:04:05 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:04:05 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:04:05 localhost ceph-mon[307093]: Reconfiguring mon.np0005486731 (monmap changed)... Oct 14 06:04:05 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486731 on np0005486731.localdomain Oct 14 06:04:05 localhost ceph-mon[307093]: Remove daemons mon.np0005486729 Oct 14 06:04:05 localhost ceph-mon[307093]: Safe to remove mon.np0005486729: new quorum should be ['np0005486730', 'np0005486733', 'np0005486732', 'np0005486731'] (from ['np0005486730', 'np0005486733', 'np0005486732', 'np0005486731']) Oct 14 06:04:05 localhost ceph-mon[307093]: Removing monitor np0005486729 from monmap... 
Oct 14 06:04:05 localhost ceph-mon[307093]: Removing daemon mon.np0005486729 from np0005486729.localdomain -- ports [] Oct 14 06:04:05 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mon rm", "name": "np0005486729"} : dispatch Oct 14 06:04:05 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election Oct 14 06:04:05 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election Oct 14 06:04:05 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election Oct 14 06:04:05 localhost ceph-mon[307093]: mon.np0005486730 calling monitor election Oct 14 06:04:05 localhost ceph-mon[307093]: mon.np0005486730 is new leader, mons np0005486730,np0005486733,np0005486732,np0005486731 in quorum (ranks 0,1,2,3) Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10357 keys, 15824943 bytes, temperature: kUnknown Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436245531504, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 15824943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15763011, "index_size": 34951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25925, "raw_key_size": 277621, "raw_average_key_size": 26, "raw_value_size": 15583015, "raw_average_value_size": 1504, "num_data_blocks": 1330, "num_entries": 10357, "num_filter_entries": 10357, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", 
"property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Oct 14 06:04:05 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Oct 14 06:04:05 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Oct 14 06:04:05 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm Oct 14 06:04:05 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Oct 14 06:04:05 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub'] Oct 14 06:04:05 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:05 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.531892) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 15824943 bytes Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.533586) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.5 rd, 154.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 16.3 +0.0 blob) out(15.1 +0.0 blob), read-write-amplify(49.8) write-amplify(23.5) OK, records in: 10892, records dropped: 535 output_compression: NoCompression Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.533615) EVENT_LOG_v1 {"time_micros": 1760436245533603, "job": 6, "event": "compaction_finished", "compaction_time_micros": 102399, "compaction_time_cpu_micros": 47278, "output_level": 6, "num_output_files": 1, "total_output_size": 15824943, "num_input_records": 10892, "num_output_records": 10357, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436245533868, "job": 6, "event": "table_file_deletion", "file_number": 17} Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436245536234, 
"job": 6, "event": "table_file_deletion", "file_number": 15} Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.429030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.536333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.536342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.536346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.536351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:05.536355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:06 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Oct 14 06:04:06 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... 
Oct 14 06:04:06 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:04:06 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:04:06 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:06 localhost ceph-mon[307093]: Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:04:06 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:04:06 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:04:06 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:06 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:06 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 14 06:04:07 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Oct 14 06:04:07 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... Oct 14 06:04:07 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:04:07 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:04:07 localhost ceph-mon[307093]: Reconfiguring osd.1 (monmap changed)... 
Oct 14 06:04:07 localhost ceph-mon[307093]: Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:04:07 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:07 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:07 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 14 06:04:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:08 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34301 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005486729.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch Oct 14 06:04:08 localhost ceph-mgr[300442]: [cephadm INFO root] Removed label mon from host np0005486729.localdomain Oct 14 06:04:08 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removed label mon from host np0005486729.localdomain Oct 14 06:04:08 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... Oct 14 06:04:08 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... Oct 14 06:04:08 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:04:08 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:04:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:04:08 localhost podman[309315]: 2025-10-14 10:04:08.555803133 +0000 UTC m=+0.092764538 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true) Oct 14 06:04:08 localhost podman[309315]: 2025-10-14 10:04:08.574152965 +0000 UTC m=+0.111114320 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:04:08 localhost ceph-mon[307093]: Reconfiguring osd.5 (monmap changed)... 
Oct 14 06:04:08 localhost ceph-mon[307093]: Reconfiguring daemon osd.5 on np0005486732.localdomain
Oct 14 06:04:08 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:08 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:08 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:08 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 14 06:04:08 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:04:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:09 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486732.pasqzz (monmap changed)...
Oct 14 06:04:09 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486732.pasqzz (monmap changed)...
Oct 14 06:04:09 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain
Oct 14 06:04:09 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain
Oct 14 06:04:09 localhost ceph-mon[307093]: mon.np0005486731@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:04:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34305 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005486729.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:04:09 localhost ceph-mgr[300442]: [cephadm INFO root] Removed label mgr from host np0005486729.localdomain
Oct 14 06:04:09 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005486729.localdomain
Oct 14 06:04:09 localhost ceph-mon[307093]: Removed label mon from host np0005486729.localdomain
Oct 14 06:04:09 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)...
Oct 14 06:04:09 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain
Oct 14 06:04:09 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:09 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:09 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:04:09 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:10 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005486732 (monmap changed)...
Oct 14 06:04:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005486732 (monmap changed)...
Oct 14 06:04:10 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain
Oct 14 06:04:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain
Oct 14 06:04:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34313 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005486729.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:04:10 localhost ceph-mgr[300442]: [cephadm INFO root] Removed label _admin from host np0005486729.localdomain
Oct 14 06:04:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005486729.localdomain
Oct 14 06:04:10 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486732.pasqzz (monmap changed)...
Oct 14 06:04:10 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain
Oct 14 06:04:10 localhost ceph-mon[307093]: Removed label mgr from host np0005486729.localdomain
Oct 14 06:04:10 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:10 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:10 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 14 06:04:10 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486733 (monmap changed)...
Oct 14 06:04:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486733 (monmap changed)...
Oct 14 06:04:10 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain
Oct 14 06:04:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain
Oct 14 06:04:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 06:04:11 localhost podman[309334]: 2025-10-14 10:04:11.54668333 +0000 UTC m=+0.087563790 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 06:04:11 localhost podman[309334]: 2025-10-14 10:04:11.58176976 +0000 UTC m=+0.122650140 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 14 06:04:11 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:04:11 localhost ceph-mon[307093]: Reconfiguring mon.np0005486732 (monmap changed)...
Oct 14 06:04:11 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain
Oct 14 06:04:11 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:11 localhost ceph-mon[307093]: Removed label _admin from host np0005486729.localdomain
Oct 14 06:04:11 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:11 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:11 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:04:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:11 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Oct 14 06:04:11 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Oct 14 06:04:11 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005486733.localdomain
Oct 14 06:04:11 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005486733.localdomain
Oct 14 06:04:12 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Oct 14 06:04:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Oct 14 06:04:12 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005486733.localdomain
Oct 14 06:04:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005486733.localdomain
Oct 14 06:04:12 localhost ceph-mon[307093]: Reconfiguring crash.np0005486733 (monmap changed)...
Oct 14 06:04:12 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain
Oct 14 06:04:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:12 localhost ceph-mon[307093]: Reconfiguring osd.0 (monmap changed)...
Oct 14 06:04:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 14 06:04:12 localhost ceph-mon[307093]: Reconfiguring daemon osd.0 on np0005486733.localdomain
Oct 14 06:04:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 14 06:04:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:04:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:04:13 localhost podman[309353]: 2025-10-14 10:04:13.53302343 +0000 UTC m=+0.077195521 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 14 06:04:13 localhost podman[309353]: 2025-10-14 10:04:13.549017058 +0000 UTC m=+0.093189189 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, config_id=edpm, vcs-type=git, io.openshift.expose-services=)
Oct 14 06:04:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:04:13 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:04:13 localhost systemd[1]: tmp-crun.sk2aDr.mount: Deactivated successfully.
Oct 14 06:04:13 localhost podman[309354]: 2025-10-14 10:04:13.635357353 +0000 UTC m=+0.178530178 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 14 06:04:13 localhost podman[309354]: 2025-10-14 10:04:13.647956122 +0000 UTC m=+0.191128947 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 06:04:13 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:04:13 localhost podman[309384]: 2025-10-14 10:04:13.741463079 +0000 UTC m=+0.178374415 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 14 06:04:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:13 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)...
Oct 14 06:04:13 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)...
Oct 14 06:04:13 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain
Oct 14 06:04:13 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain
Oct 14 06:04:13 localhost podman[309384]: 2025-10-14 10:04:13.850228895 +0000 UTC m=+0.287140191 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 06:04:13 localhost ceph-mon[307093]: Reconfiguring osd.3 (monmap changed)...
Oct 14 06:04:13 localhost ceph-mon[307093]: Reconfiguring daemon osd.3 on np0005486733.localdomain
Oct 14 06:04:13 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:13 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:13 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 14 06:04:13 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:04:14 localhost ceph-mon[307093]: mon.np0005486731@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:04:14 localhost systemd[1]: Stopping User Manager for UID 1003...
Oct 14 06:04:14 localhost systemd[304964]: Activating special unit Exit the Session...
Oct 14 06:04:14 localhost systemd[304964]: Stopped target Main User Target.
Oct 14 06:04:14 localhost systemd[304964]: Stopped target Basic System.
Oct 14 06:04:14 localhost systemd[304964]: Stopped target Paths.
Oct 14 06:04:14 localhost systemd[304964]: Stopped target Sockets.
Oct 14 06:04:14 localhost systemd[304964]: Stopped target Timers.
Oct 14 06:04:14 localhost systemd[304964]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 14 06:04:14 localhost systemd[304964]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 14 06:04:14 localhost systemd[304964]: Closed D-Bus User Message Bus Socket.
Oct 14 06:04:14 localhost systemd[304964]: Stopped Create User's Volatile Files and Directories.
Oct 14 06:04:14 localhost systemd[304964]: Removed slice User Application Slice.
Oct 14 06:04:14 localhost systemd[304964]: Reached target Shutdown.
Oct 14 06:04:14 localhost systemd[304964]: Finished Exit the Session.
Oct 14 06:04:14 localhost systemd[304964]: Reached target Exit the Session.
Oct 14 06:04:14 localhost systemd[1]: user@1003.service: Deactivated successfully.
Oct 14 06:04:14 localhost systemd[1]: Stopped User Manager for UID 1003.
Oct 14 06:04:14 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Oct 14 06:04:14 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Oct 14 06:04:14 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Oct 14 06:04:14 localhost systemd[1]: Removed slice User Slice of UID 1003.
Oct 14 06:04:14 localhost systemd[1]: user-1003.slice: Consumed 2.438s CPU time.
Oct 14 06:04:14 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Oct 14 06:04:14 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486733.primvu (monmap changed)...
Oct 14 06:04:14 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486733.primvu (monmap changed)...
Oct 14 06:04:14 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain
Oct 14 06:04:14 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain
Oct 14 06:04:14 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)...
Oct 14 06:04:14 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain
Oct 14 06:04:14 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:14 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:14 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:04:14 localhost nova_compute[295778]: 2025-10-14 10:04:14.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:14 localhost nova_compute[295778]: 2025-10-14 10:04:14.930 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:04:14 localhost nova_compute[295778]: 2025-10-14 10:04:14.931 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:04:14 localhost nova_compute[295778]: 2025-10-14 10:04:14.931 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:04:14 localhost nova_compute[295778]: 2025-10-14 10:04:14.931 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 06:04:14 localhost nova_compute[295778]: 2025-10-14 10:04:14.932 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:04:15 localhost ceph-mon[307093]: mon.np0005486731@3(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 14 06:04:15 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/268938072' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 14 06:04:15 localhost nova_compute[295778]: 2025-10-14 10:04:15.378 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:04:15 localhost nova_compute[295778]: 2025-10-14 10:04:15.564 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 06:04:15 localhost nova_compute[295778]: 2025-10-14 10:04:15.565 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12258MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 06:04:15 localhost nova_compute[295778]: 2025-10-14 10:04:15.565 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:04:15 localhost nova_compute[295778]: 2025-10-14 10:04:15.566 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:04:15 localhost nova_compute[295778]: 2025-10-14 10:04:15.640 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 06:04:15 localhost nova_compute[295778]: 2025-10-14 10:04:15.640 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 06:04:15 localhost nova_compute[295778]: 2025-10-14 10:04:15.665 2 DEBUG oslo_concurrency.processutils [None
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:04:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:15 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005486733 (monmap changed)... Oct 14 06:04:15 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005486733 (monmap changed)... Oct 14 06:04:15 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:04:15 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:04:15 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486733.primvu (monmap changed)... 
Oct 14 06:04:15 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain
Oct 14 06:04:15 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:15 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:15 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 14 06:04:16 localhost nova_compute[295778]: 2025-10-14 10:04:16.117 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:04:16 localhost nova_compute[295778]: 2025-10-14 10:04:16.124 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 06:04:16 localhost nova_compute[295778]: 2025-10-14 10:04:16.142 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 06:04:16 localhost nova_compute[295778]: 2025-10-14 10:04:16.145 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 06:04:16 localhost nova_compute[295778]: 2025-10-14 10:04:16.145 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:04:16 localhost ceph-mon[307093]: Reconfiguring mon.np0005486733 (monmap changed)...
Oct 14 06:04:16 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486733 on np0005486733.localdomain
Oct 14 06:04:16 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:16 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:18 localhost nova_compute[295778]: 2025-10-14 10:04:18.142 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:18 localhost nova_compute[295778]: 2025-10-14 10:04:18.165 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:18 localhost nova_compute[295778]: 2025-10-14 10:04:18.165 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:18 localhost nova_compute[295778]: 2025-10-14 10:04:18.165 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Removing np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486730.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Removing np0005486729.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing np0005486729.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Removing np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:18 localhost nova_compute[295778]: 2025-10-14 10:04:18.922 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:18 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:18 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:19 localhost ceph-mon[307093]: mon.np0005486731@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:04:19 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:19 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:19 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:04:19 localhost ceph-mon[307093]: Removing np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:19 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:19 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:19 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:19 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:19 localhost ceph-mon[307093]: Removing np0005486729.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:04:19 localhost ceph-mon[307093]: Removing np0005486729.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:04:19 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:19 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:19 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:19 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:19 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 85128418-ed41-4df9-8ab0-469e92183dbb (Updating mgr deployment (-1 -> 4))
Oct 14 06:04:19 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Removing daemon mgr.np0005486729.xpybho from np0005486729.localdomain -- ports [8765]
Oct 14 06:04:19 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing daemon mgr.np0005486729.xpybho from np0005486729.localdomain -- ports [8765]
Oct 14 06:04:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:19 localhost nova_compute[295778]: 2025-10-14 10:04:19.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:19 localhost nova_compute[295778]: 2025-10-14 10:04:19.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 06:04:19 localhost nova_compute[295778]: 2025-10-14 10:04:19.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 06:04:19 localhost nova_compute[295778]: 2025-10-14 10:04:19.927 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 06:04:19 localhost nova_compute[295778]: 2025-10-14 10:04:19.928 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:19 localhost nova_compute[295778]: 2025-10-14 10:04:19.929 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:19 localhost nova_compute[295778]: 2025-10-14 10:04:19.929 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:19 localhost nova_compute[295778]: 2025-10-14 10:04:19.929 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 06:04:20 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:20 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:20 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:20 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:20 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:20 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:20 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:20 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:20 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:20 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:20 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:20 localhost ceph-mon[307093]: Removing daemon mgr.np0005486729.xpybho from np0005486729.localdomain -- ports [8765]
Oct 14 06:04:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:21 localhost ceph-mgr[300442]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.np0005486729.xpybho
Oct 14 06:04:21 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing key for mgr.np0005486729.xpybho
Oct 14 06:04:22 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 85128418-ed41-4df9-8ab0-469e92183dbb (Updating mgr deployment (-1 -> 4))
Oct 14 06:04:22 localhost ceph-mgr[300442]: [progress INFO root] Completed event 85128418-ed41-4df9-8ab0-469e92183dbb (Updating mgr deployment (-1 -> 4)) in 2 seconds
Oct 14 06:04:22 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev e287b7db-2460-4f01-bcda-1c5165358e33 (Updating node-proxy deployment (+5 -> 5))
Oct 14 06:04:22 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev e287b7db-2460-4f01-bcda-1c5165358e33 (Updating node-proxy deployment (+5 -> 5))
Oct 14 06:04:22 localhost ceph-mgr[300442]: [progress INFO root] Completed event e287b7db-2460-4f01-bcda-1c5165358e33 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Oct 14 06:04:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.26784 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005486729.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:04:22 localhost ceph-mgr[300442]: [cephadm INFO root] Added label _no_schedule to host np0005486729.localdomain
Oct 14 06:04:22 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005486729.localdomain
Oct 14 06:04:22 localhost ceph-mgr[300442]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005486729.localdomain
Oct 14 06:04:22 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005486729.localdomain
Oct 14 06:04:22 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "mgr.np0005486729.xpybho"} : dispatch
Oct 14 06:04:22 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005486729.xpybho"}]': finished
Oct 14 06:04:22 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:22 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:22 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:22 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34350 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005486729.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 14 06:04:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:23 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev c9ca88dd-d063-406d-9261-98e7a139ddb1 (Updating crash deployment (-1 -> 4))
Oct 14 06:04:23 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Removing daemon crash.np0005486729 from np0005486729.localdomain -- ports []
Oct 14 06:04:23 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing daemon crash.np0005486729 from np0005486729.localdomain -- ports []
Oct 14 06:04:23 localhost ceph-mon[307093]: Removing key for mgr.np0005486729.xpybho
Oct 14 06:04:23 localhost ceph-mon[307093]: Added label _no_schedule to host np0005486729.localdomain
Oct 14 06:04:23 localhost ceph-mon[307093]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005486729.localdomain
Oct 14 06:04:23 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:23 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:23 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:04:23 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:24 localhost ceph-mon[307093]: mon.np0005486731@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:04:24 localhost ceph-mon[307093]: Removing daemon crash.np0005486729 from np0005486729.localdomain -- ports []
Oct 14 06:04:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.44247 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005486729.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:04:25 localhost ceph-mgr[300442]: [cephadm INFO root] Removed host np0005486729.localdomain
Oct 14 06:04:25 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removed host np0005486729.localdomain
Oct 14 06:04:25 localhost ceph-mgr[300442]: [cephadm INFO cephadm.services.cephadmservice] Removing key for client.crash.np0005486729.localdomain
Oct 14 06:04:25 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing key for client.crash.np0005486729.localdomain
Oct 14 06:04:25 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev c9ca88dd-d063-406d-9261-98e7a139ddb1 (Updating crash deployment (-1 -> 4))
Oct 14 06:04:25 localhost ceph-mgr[300442]: [progress INFO root] Completed event c9ca88dd-d063-406d-9261-98e7a139ddb1 (Updating crash deployment (-1 -> 4)) in 1 seconds
Oct 14 06:04:25 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 115cc170-f503-4951-bb25-6a1973b36c03 (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:04:25 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 115cc170-f503-4951-bb25-6a1973b36c03 (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:04:25 localhost ceph-mgr[300442]: [progress INFO root] Completed event 115cc170-f503-4951-bb25-6a1973b36c03 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Oct 14 06:04:25 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 517b13d8-de78-4422-a0d6-8969410bf19b (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:04:25 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 517b13d8-de78-4422-a0d6-8969410bf19b (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:04:25 localhost ceph-mgr[300442]: [progress INFO root] Completed event 517b13d8-de78-4422-a0d6-8969410bf19b (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Oct 14 06:04:25 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events
Oct 14 06:04:25 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005486730 (monmap changed)...
Oct 14 06:04:25 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005486730 (monmap changed)...
Oct 14 06:04:25 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005486730 on np0005486730.localdomain
Oct 14 06:04:25 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005486730 on np0005486730.localdomain
Oct 14 06:04:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486729.localdomain"} : dispatch
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486729.localdomain"}]': finished
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.crash.np0005486729.localdomain"} : dispatch
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005486729.localdomain"}]': finished
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:25 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 14 06:04:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 06:04:26 localhost systemd[1]: tmp-crun.AiyEZl.mount: Deactivated successfully.
Oct 14 06:04:26 localhost podman[309841]: 2025-10-14 10:04:26.548925021 +0000 UTC m=+0.087456436 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 06:04:26 localhost podman[309841]: 2025-10-14 10:04:26.564149039 +0000 UTC m=+0.102680434 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Oct 14 06:04:26 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486730.ddfidc (monmap changed)...
Oct 14 06:04:26 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486730.ddfidc (monmap changed)...
Oct 14 06:04:26 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain
Oct 14 06:04:26 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain
Oct 14 06:04:26 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 06:04:26 localhost ceph-mon[307093]: Removed host np0005486729.localdomain
Oct 14 06:04:26 localhost ceph-mon[307093]: Removing key for client.crash.np0005486729.localdomain
Oct 14 06:04:26 localhost ceph-mon[307093]: Reconfiguring mon.np0005486730 (monmap changed)...
Oct 14 06:04:26 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486730 on np0005486730.localdomain
Oct 14 06:04:26 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:26 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:26 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:04:27 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486730 (monmap changed)...
Oct 14 06:04:27 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486730 (monmap changed)... Oct 14 06:04:27 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain Oct 14 06:04:27 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain Oct 14 06:04:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:27 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486730.ddfidc (monmap changed)... Oct 14 06:04:27 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain Oct 14 06:04:27 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:27 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:27 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:04:28 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486731 (monmap changed)... Oct 14 06:04:28 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486731 (monmap changed)... Oct 14 06:04:28 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain Oct 14 06:04:28 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain Oct 14 06:04:28 localhost ceph-mon[307093]: Reconfiguring crash.np0005486730 (monmap changed)... 
Oct 14 06:04:28 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain Oct 14 06:04:28 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:28 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:28 localhost ceph-mon[307093]: Reconfiguring crash.np0005486731 (monmap changed)... Oct 14 06:04:28 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:04:28 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain Oct 14 06:04:29 localhost podman[309914]: Oct 14 06:04:29 localhost podman[309914]: 2025-10-14 10:04:29.014951913 +0000 UTC m=+0.101430060 container create 78a00feb566bdce7d3b564d651fbbfa1826358f353e52553bb49334772993981 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_solomon, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, 
build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True) Oct 14 06:04:29 localhost systemd[1]: Started libpod-conmon-78a00feb566bdce7d3b564d651fbbfa1826358f353e52553bb49334772993981.scope. Oct 14 06:04:29 localhost systemd[1]: Started libcrun container. Oct 14 06:04:29 localhost podman[309914]: 2025-10-14 10:04:28.983616133 +0000 UTC m=+0.070094320 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:29 localhost podman[309914]: 2025-10-14 10:04:29.087360415 +0000 UTC m=+0.173838562 container init 78a00feb566bdce7d3b564d651fbbfa1826358f353e52553bb49334772993981 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_solomon, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, name=rhceph, vendor=Red Hat, Inc., version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main) Oct 14 06:04:29 localhost podman[309914]: 2025-10-14 10:04:29.099936802 +0000 UTC m=+0.186414959 container start 78a00feb566bdce7d3b564d651fbbfa1826358f353e52553bb49334772993981 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_solomon, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=553, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:04:29 localhost podman[309914]: 2025-10-14 10:04:29.100236881 +0000 UTC m=+0.186715078 container attach 78a00feb566bdce7d3b564d651fbbfa1826358f353e52553bb49334772993981 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_solomon, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph) Oct 14 06:04:29 localhost competent_solomon[309929]: 167 167 Oct 14 06:04:29 localhost systemd[1]: libpod-78a00feb566bdce7d3b564d651fbbfa1826358f353e52553bb49334772993981.scope: Deactivated successfully. Oct 14 06:04:29 localhost podman[309914]: 2025-10-14 10:04:29.104213647 +0000 UTC m=+0.190691844 container died 78a00feb566bdce7d3b564d651fbbfa1826358f353e52553bb49334772993981 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_solomon, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.33.12, RELEASE=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.) 
Oct 14 06:04:29 localhost ceph-mon[307093]: mon.np0005486731@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:04:29 localhost podman[309934]: 2025-10-14 10:04:29.205616366 +0000 UTC m=+0.087814235 container remove 78a00feb566bdce7d3b564d651fbbfa1826358f353e52553bb49334772993981 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_solomon, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, build-date=2025-09-24T08:57:55, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , name=rhceph, release=553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:04:29 localhost systemd[1]: libpod-conmon-78a00feb566bdce7d3b564d651fbbfa1826358f353e52553bb49334772993981.scope: Deactivated successfully. Oct 14 06:04:29 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Oct 14 06:04:29 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... 
Oct 14 06:04:29 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:04:29 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:04:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:29 localhost podman[310003]: Oct 14 06:04:29 localhost podman[310003]: 2025-10-14 10:04:29.909071868 +0000 UTC m=+0.077540220 container create 8e12349a9c0f37a57414000b0666c6d288071b1d0f7602c9c4a1c0c32abaf622 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_cohen, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux ) Oct 14 06:04:29 localhost systemd[1]: Started libpod-conmon-8e12349a9c0f37a57414000b0666c6d288071b1d0f7602c9c4a1c0c32abaf622.scope. Oct 14 06:04:29 localhost systemd[1]: Started libcrun container. 
Oct 14 06:04:29 localhost podman[310003]: 2025-10-14 10:04:29.966060126 +0000 UTC m=+0.134528488 container init 8e12349a9c0f37a57414000b0666c6d288071b1d0f7602c9c4a1c0c32abaf622 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_cohen, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, name=rhceph, architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:04:29 localhost podman[310003]: 2025-10-14 10:04:29.975014296 +0000 UTC m=+0.143482648 container start 8e12349a9c0f37a57414000b0666c6d288071b1d0f7602c9c4a1c0c32abaf622 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_cohen, GIT_BRANCH=main, release=553, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, build-date=2025-09-24T08:57:55, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:04:29 localhost podman[310003]: 2025-10-14 10:04:29.975244742 +0000 UTC m=+0.143713144 container attach 8e12349a9c0f37a57414000b0666c6d288071b1d0f7602c9c4a1c0c32abaf622 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_cohen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., release=553, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, version=7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64) Oct 14 06:04:29 localhost podman[310003]: 2025-10-14 10:04:29.878327783 +0000 UTC m=+0.046796185 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:29 localhost 
vigorous_cohen[310018]: 167 167 Oct 14 06:04:29 localhost systemd[1]: libpod-8e12349a9c0f37a57414000b0666c6d288071b1d0f7602c9c4a1c0c32abaf622.scope: Deactivated successfully. Oct 14 06:04:29 localhost podman[310003]: 2025-10-14 10:04:29.979820135 +0000 UTC m=+0.148288517 container died 8e12349a9c0f37a57414000b0666c6d288071b1d0f7602c9c4a1c0c32abaf622 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_cohen, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, RELEASE=main, release=553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:04:30 localhost systemd[1]: var-lib-containers-storage-overlay-30c4148662cef41a0ccb2aa2b94f0781545805cbb4a6827cbcfc43b47bdb6545-merged.mount: Deactivated successfully. Oct 14 06:04:30 localhost systemd[1]: tmp-crun.wNWuPf.mount: Deactivated successfully. Oct 14 06:04:30 localhost systemd[1]: var-lib-containers-storage-overlay-9d13f888300ef651e0c5eecaa366135aa61714e06c885b51d4440692d3744fc4-merged.mount: Deactivated successfully. 
Oct 14 06:04:30 localhost podman[310023]: 2025-10-14 10:04:30.077865154 +0000 UTC m=+0.084623780 container remove 8e12349a9c0f37a57414000b0666c6d288071b1d0f7602c9c4a1c0c32abaf622 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_cohen, vendor=Red Hat, Inc., ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=553, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Oct 14 06:04:30 localhost systemd[1]: libpod-conmon-8e12349a9c0f37a57414000b0666c6d288071b1d0f7602c9c4a1c0c32abaf622.scope: Deactivated successfully. Oct 14 06:04:30 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Oct 14 06:04:30 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... 
Oct 14 06:04:30 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:04:30 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:04:30 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:30 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:30 localhost ceph-mon[307093]: Reconfiguring osd.2 (monmap changed)... Oct 14 06:04:30 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:04:30 localhost ceph-mon[307093]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:04:30 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:30 localhost podman[246584]: time="2025-10-14T10:04:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:04:30 localhost podman[246584]: @ - - [14/Oct/2025:10:04:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:04:30 localhost podman[246584]: @ - - [14/Oct/2025:10:04:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18349 "" "Go-http-client/1.1" Oct 14 06:04:30 localhost podman[310097]: Oct 14 06:04:30 localhost podman[310097]: 2025-10-14 10:04:30.856155952 +0000 UTC m=+0.063931604 container create 5f983ea00468d1860f9bc29ef50c57d14080a4a4999be2330f16e7cec997007e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_taussig, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, vcs-type=git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:04:30 localhost systemd[1]: Started libpod-conmon-5f983ea00468d1860f9bc29ef50c57d14080a4a4999be2330f16e7cec997007e.scope. Oct 14 06:04:30 localhost systemd[1]: Started libcrun container. 
Oct 14 06:04:30 localhost podman[310097]: 2025-10-14 10:04:30.825715306 +0000 UTC m=+0.033490968 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:30 localhost podman[310097]: 2025-10-14 10:04:30.935846979 +0000 UTC m=+0.143622631 container init 5f983ea00468d1860f9bc29ef50c57d14080a4a4999be2330f16e7cec997007e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_taussig, release=553, ceph=True, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.expose-services=) Oct 14 06:04:30 localhost podman[310097]: 2025-10-14 10:04:30.946913076 +0000 UTC m=+0.154688728 container start 5f983ea00468d1860f9bc29ef50c57d14080a4a4999be2330f16e7cec997007e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_taussig, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , 
GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, version=7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph) Oct 14 06:04:30 localhost podman[310097]: 2025-10-14 10:04:30.947363088 +0000 UTC m=+0.155138740 container attach 5f983ea00468d1860f9bc29ef50c57d14080a4a4999be2330f16e7cec997007e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_taussig, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., release=553, name=rhceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:04:30 
localhost reverent_taussig[310112]: 167 167 Oct 14 06:04:30 localhost systemd[1]: libpod-5f983ea00468d1860f9bc29ef50c57d14080a4a4999be2330f16e7cec997007e.scope: Deactivated successfully. Oct 14 06:04:30 localhost podman[310097]: 2025-10-14 10:04:30.953832191 +0000 UTC m=+0.161607853 container died 5f983ea00468d1860f9bc29ef50c57d14080a4a4999be2330f16e7cec997007e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_taussig, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, release=553, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, version=7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:04:31 localhost systemd[1]: tmp-crun.U2x95O.mount: Deactivated successfully. Oct 14 06:04:31 localhost systemd[1]: var-lib-containers-storage-overlay-bfae3c4ce24cd5e4d99012647923307a05a0429367f3c3e5e56b88462de5cc42-merged.mount: Deactivated successfully. 
Oct 14 06:04:31 localhost podman[310117]: 2025-10-14 10:04:31.049299901 +0000 UTC m=+0.089784308 container remove 5f983ea00468d1860f9bc29ef50c57d14080a4a4999be2330f16e7cec997007e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_taussig, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:04:31 localhost systemd[1]: libpod-conmon-5f983ea00468d1860f9bc29ef50c57d14080a4a4999be2330f16e7cec997007e.scope: Deactivated successfully. Oct 14 06:04:31 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... Oct 14 06:04:31 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... 
Oct 14 06:04:31 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:04:31 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:04:31 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:31 localhost ceph-mon[307093]: Reconfiguring osd.4 (monmap changed)... Oct 14 06:04:31 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:04:31 localhost ceph-mon[307093]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:04:31 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:31 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:31 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:04:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:04:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:04:31 localhost podman[310190]: 2025-10-14 10:04:31.883768806 +0000 UTC m=+0.091064422 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:04:31 localhost podman[310206]: Oct 14 06:04:31 localhost podman[310206]: 2025-10-14 10:04:31.901155772 +0000 UTC m=+0.077824957 container create 5d1d522fd5c6cca79aeb508fa8e80dd51e3c6e01af8925c02d47c3a2e971309f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_moser, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, release=553, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc.) Oct 14 06:04:31 localhost systemd[1]: Started libpod-conmon-5d1d522fd5c6cca79aeb508fa8e80dd51e3c6e01af8925c02d47c3a2e971309f.scope. Oct 14 06:04:31 localhost podman[310190]: 2025-10-14 10:04:31.950528776 +0000 UTC m=+0.157824392 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:04:31 localhost systemd[1]: Started libcrun container. 
Oct 14 06:04:31 localhost podman[310206]: 2025-10-14 10:04:31.869098053 +0000 UTC m=+0.045767268 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:31 localhost podman[310189]: 2025-10-14 10:04:31.988102574 +0000 UTC m=+0.195948416 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:04:32 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:04:32 localhost podman[310206]: 2025-10-14 10:04:32.015385915 +0000 UTC m=+0.192055090 container init 5d1d522fd5c6cca79aeb508fa8e80dd51e3c6e01af8925c02d47c3a2e971309f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_moser, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, version=7, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:04:32 localhost podman[310189]: 2025-10-14 10:04:32.020982155 +0000 UTC m=+0.228827937 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, 
org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 14 06:04:32 localhost podman[310206]: 2025-10-14 10:04:32.033297766 +0000 UTC m=+0.209966941 container start 5d1d522fd5c6cca79aeb508fa8e80dd51e3c6e01af8925c02d47c3a2e971309f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_moser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
ceph=True, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, architecture=x86_64, description=Red Hat Ceph Storage 7, release=553, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, version=7) Oct 14 06:04:32 localhost podman[310206]: 2025-10-14 10:04:32.034545679 +0000 UTC m=+0.211214894 container attach 5d1d522fd5c6cca79aeb508fa8e80dd51e3c6e01af8925c02d47c3a2e971309f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_moser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, description=Red Hat Ceph Storage 7) Oct 14 06:04:32 localhost systemd[1]: 
6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:04:32 localhost systemd[1]: libpod-5d1d522fd5c6cca79aeb508fa8e80dd51e3c6e01af8925c02d47c3a2e971309f.scope: Deactivated successfully. Oct 14 06:04:32 localhost vibrant_moser[310240]: 167 167 Oct 14 06:04:32 localhost podman[310206]: 2025-10-14 10:04:32.041005482 +0000 UTC m=+0.217674697 container died 5d1d522fd5c6cca79aeb508fa8e80dd51e3c6e01af8925c02d47c3a2e971309f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_moser, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, architecture=x86_64, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:04:32 localhost systemd[1]: var-lib-containers-storage-overlay-97ae853aac455c50bc4aa9be4f9ad6a42e73baee05096a76eae8e0f8113fa998-merged.mount: Deactivated successfully. 
Oct 14 06:04:32 localhost podman[310250]: 2025-10-14 10:04:32.137565362 +0000 UTC m=+0.087534518 container remove 5d1d522fd5c6cca79aeb508fa8e80dd51e3c6e01af8925c02d47c3a2e971309f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_moser, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:04:32 localhost systemd[1]: libpod-conmon-5d1d522fd5c6cca79aeb508fa8e80dd51e3c6e01af8925c02d47c3a2e971309f.scope: Deactivated successfully. Oct 14 06:04:32 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486731.swasqz (monmap changed)... Oct 14 06:04:32 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486731.swasqz (monmap changed)... 
Oct 14 06:04:32 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:04:32 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:04:32 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... Oct 14 06:04:32 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:04:32 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:32 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:32 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:04:32 localhost podman[310320]: Oct 14 06:04:32 localhost podman[310320]: 2025-10-14 10:04:32.847766734 +0000 UTC m=+0.077391635 container create 08d2bc34e3e0abd606b45038926dc26e9e24a29fce513c19d1b53b533c9b9eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_sinoussi, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, ceph=True, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12) Oct 14 06:04:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34329 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Oct 14 06:04:32 localhost ceph-mgr[300442]: [cephadm INFO root] Saving service mon spec with placement label:mon Oct 14 06:04:32 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Oct 14 06:04:32 localhost systemd[1]: Started libpod-conmon-08d2bc34e3e0abd606b45038926dc26e9e24a29fce513c19d1b53b533c9b9eb6.scope. Oct 14 06:04:32 localhost systemd[1]: Started libcrun container. Oct 14 06:04:32 localhost podman[310320]: 2025-10-14 10:04:32.905554424 +0000 UTC m=+0.135179325 container init 08d2bc34e3e0abd606b45038926dc26e9e24a29fce513c19d1b53b533c9b9eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_sinoussi, ceph=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , architecture=x86_64, release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main) Oct 14 06:04:32 localhost podman[310320]: 2025-10-14 10:04:32.914892784 +0000 UTC m=+0.144517735 container start 08d2bc34e3e0abd606b45038926dc26e9e24a29fce513c19d1b53b533c9b9eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_sinoussi, release=553, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, ceph=True, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main) Oct 14 06:04:32 localhost podman[310320]: 2025-10-14 10:04:32.815453218 +0000 UTC m=+0.045078169 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:32 localhost eager_sinoussi[310335]: 167 167 Oct 14 06:04:32 localhost podman[310320]: 2025-10-14 10:04:32.91511201 +0000 UTC m=+0.144736951 container attach 08d2bc34e3e0abd606b45038926dc26e9e24a29fce513c19d1b53b533c9b9eb6 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_sinoussi, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main) Oct 14 06:04:32 localhost systemd[1]: libpod-08d2bc34e3e0abd606b45038926dc26e9e24a29fce513c19d1b53b533c9b9eb6.scope: Deactivated successfully. 
Oct 14 06:04:32 localhost podman[310320]: 2025-10-14 10:04:32.92144664 +0000 UTC m=+0.151071551 container died 08d2bc34e3e0abd606b45038926dc26e9e24a29fce513c19d1b53b533c9b9eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_sinoussi, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, version=7, RELEASE=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, name=rhceph, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, release=553, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:04:33 localhost podman[310340]: 2025-10-14 10:04:33.01805575 +0000 UTC m=+0.085486412 container remove 08d2bc34e3e0abd606b45038926dc26e9e24a29fce513c19d1b53b533c9b9eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_sinoussi, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, 
version=7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, release=553, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , RELEASE=main) Oct 14 06:04:33 localhost systemd[1]: libpod-conmon-08d2bc34e3e0abd606b45038926dc26e9e24a29fce513c19d1b53b533c9b9eb6.scope: Deactivated successfully. Oct 14 06:04:33 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 0e11bcc1-caa1-4838-955a-1174529a2382 (Updating node-proxy deployment (+4 -> 4)) Oct 14 06:04:33 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 0e11bcc1-caa1-4838-955a-1174529a2382 (Updating node-proxy deployment (+4 -> 4)) Oct 14 06:04:33 localhost ceph-mgr[300442]: [progress INFO root] Completed event 0e11bcc1-caa1-4838-955a-1174529a2382 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Oct 14 06:04:33 localhost openstack_network_exporter[248748]: ERROR 10:04:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:04:33 localhost openstack_network_exporter[248748]: ERROR 10:04:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:04:33 localhost openstack_network_exporter[248748]: ERROR 10:04:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:04:33 localhost openstack_network_exporter[248748]: ERROR 10:04:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:04:33 localhost openstack_network_exporter[248748]: Oct 14 06:04:33 localhost 
openstack_network_exporter[248748]: ERROR 10:04:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:04:33 localhost openstack_network_exporter[248748]: Oct 14 06:04:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:33 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486731.swasqz (monmap changed)... Oct 14 06:04:33 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:04:33 localhost ceph-mon[307093]: Saving service mon spec with placement label:mon Oct 14 06:04:33 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:33 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:33 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:33 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:04:33 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34337 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005486732", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 14 06:04:34 localhost ceph-mon[307093]: mon.np0005486731@3(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:04:35 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34345 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005486732"], "force": true, "target": ["mon-mgr", ""]}]: 
dispatch
Oct 14 06:04:35 localhost ceph-mgr[300442]: [cephadm INFO root] Remove daemons mon.np0005486732
Oct 14 06:04:35 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005486732
Oct 14 06:04:35 localhost ceph-mgr[300442]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005486732: new quorum should be ['np0005486730', 'np0005486733', 'np0005486731'] (from ['np0005486730', 'np0005486733', 'np0005486731'])
Oct 14 06:04:35 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005486732: new quorum should be ['np0005486730', 'np0005486733', 'np0005486731'] (from ['np0005486730', 'np0005486733', 'np0005486731'])
Oct 14 06:04:35 localhost ceph-mgr[300442]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005486732 from monmap...
Oct 14 06:04:35 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing monitor np0005486732 from monmap...
Oct 14 06:04:35 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005486732 from np0005486732.localdomain -- ports []
Oct 14 06:04:35 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005486732 from np0005486732.localdomain -- ports []
Oct 14 06:04:35 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events
Oct 14 06:04:35 localhost ceph-mon[307093]: mon.np0005486731@3(peon) e11 my rank is now 2 (was 3)
Oct 14 06:04:35 localhost ceph-mgr[300442]: client.34273 ms_handle_reset on v2:172.18.0.103:3300/0
Oct 14 06:04:35 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election
Oct 14 06:04:35 localhost ceph-mon[307093]: paxos.2).electionLogic(42) init, last seen epoch 42
Oct 14 06:04:35 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:04:35 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:04:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:38 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:04:38
Oct 14 06:04:38 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:04:38 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap
Oct 14 06:04:38 localhost ceph-mgr[300442]: [balancer INFO root] pools ['manila_data', 'backups', '.mgr', 'images', 'vms', 'manila_metadata', 'volumes']
Oct 14 06:04:38 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32)
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:04:38 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16)
Oct 14 06:04:38 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:38 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:38 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:38 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:38 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:38 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:04:38 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:04:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 06:04:39 localhost podman[310377]: 2025-10-14 10:04:39.555271795 +0000 UTC m=+0.087218151 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 06:04:39 localhost podman[310377]: 2025-10-14 10:04:39.592830952 +0000 UTC m=+0.124777298 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 06:04:39 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:04:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:40 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486730.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:40 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:40 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:40 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:40 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:40 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:40 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:40 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:04:41 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:41 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:41 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:41 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:41 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:41 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:41 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:41 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:41 localhost ceph-mon[307093]: paxos.2).electionLogic(43) init, last seen epoch 43, mid-election, bumping
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:04:41 localhost ceph-mon[307093]: Remove daemons mon.np0005486732
Oct 14 06:04:41 localhost ceph-mon[307093]: Safe to remove mon.np0005486732: new quorum should be ['np0005486730', 'np0005486733', 'np0005486731'] (from ['np0005486730', 'np0005486733', 'np0005486731'])
Oct 14 06:04:41 localhost ceph-mon[307093]: Removing monitor np0005486732 from monmap...
Oct 14 06:04:41 localhost ceph-mon[307093]: Removing daemon mon.np0005486732 from np0005486732.localdomain -- ports []
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486730 calling monitor election
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486730 is new leader, mons np0005486730,np0005486733 in quorum (ranks 0,1)
Oct 14 06:04:41 localhost ceph-mon[307093]: Health check failed: 1/3 mons down, quorum np0005486730,np0005486733 (MON_DOWN)
Oct 14 06:04:41 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005486730,np0005486733
Oct 14 06:04:41 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 14 06:04:41 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm
Oct 14 06:04:41 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 14 06:04:41 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub']
Oct 14 06:04:41 localhost ceph-mon[307093]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005486730,np0005486733
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486731 (rank 2) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Oct 14 06:04:41 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:41 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486730 calling monitor election
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election
Oct 14 06:04:41 localhost ceph-mon[307093]: mon.np0005486730 is new leader, mons np0005486730,np0005486733,np0005486731 in quorum (ranks 0,1,2)
Oct 14 06:04:41 localhost ceph-mon[307093]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005486730,np0005486733)
Oct 14 06:04:41 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 14 06:04:41 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 14 06:04:41 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm
Oct 14 06:04:41 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 14 06:04:41 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub']
Oct 14 06:04:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 06:04:41 localhost podman[310646]: 2025-10-14 10:04:41.783263165 +0000 UTC m=+0.080308474 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible)
Oct 14 06:04:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:41 localhost podman[310646]: 2025-10-14 10:04:41.819209469 +0000 UTC m=+0.116254748 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 06:04:41 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:04:42 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 568e220b-6763-4024-a24a-cff21045acd4 (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:04:42 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 568e220b-6763-4024-a24a-cff21045acd4 (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:04:42 localhost ceph-mgr[300442]: [progress INFO root] Completed event 568e220b-6763-4024-a24a-cff21045acd4 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Oct 14 06:04:42 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486730.ddfidc (monmap changed)...
Oct 14 06:04:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486730.ddfidc (monmap changed)...
Oct 14 06:04:42 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain
Oct 14 06:04:42 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain
Oct 14 06:04:42 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:42 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:42 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:42 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:42 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 14 06:04:43 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486730 (monmap changed)...
Oct 14 06:04:43 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486730 (monmap changed)...
Oct 14 06:04:43 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain
Oct 14 06:04:43 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain
Oct 14 06:04:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:04:43 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486730.ddfidc (monmap changed)...
Oct 14 06:04:43 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain
Oct 14 06:04:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:04:43 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:04:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486731 (monmap changed)...
Oct 14 06:04:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486731 (monmap changed)...
Oct 14 06:04:44 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain
Oct 14 06:04:44 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain
Oct 14 06:04:44 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:04:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:04:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:04:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:04:44 localhost podman[310773]: 2025-10-14 10:04:44.34970497 +0000 UTC m=+0.089116090 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 14 06:04:44 localhost podman[310773]: 2025-10-14 10:04:44.367121967 +0000 UTC m=+0.106533137 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350)
Oct 14 06:04:44 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:04:44 localhost systemd[1]: tmp-crun.kxcWi1.mount: Deactivated successfully.
Oct 14 06:04:44 localhost podman[310775]: 2025-10-14 10:04:44.468016182 +0000 UTC m=+0.201677118 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 06:04:44 localhost podman[310778]: 2025-10-14 10:04:44.429350077 +0000 UTC m=+0.160032363 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 14 06:04:44 localhost podman[310778]: 2025-10-14 10:04:44.513111542 +0000 UTC m=+0.243793838 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 06:04:44 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:04:44 localhost podman[310775]: 2025-10-14 10:04:44.533226512 +0000 UTC m=+0.266887408 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 06:04:44 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:04:44 localhost podman[310877]:
Oct 14 06:04:44 localhost podman[310877]: 2025-10-14 10:04:44.777478481 +0000 UTC m=+0.074668584 container create ec47e452f35c05bab18eebe5c4375d11622c4f3f1695417f5da73f98a1a4df39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_mclean, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 14 06:04:44 localhost systemd[1]: Started libpod-conmon-ec47e452f35c05bab18eebe5c4375d11622c4f3f1695417f5da73f98a1a4df39.scope.
Oct 14 06:04:44 localhost systemd[1]: Started libcrun container.
Oct 14 06:04:44 localhost podman[310877]: 2025-10-14 10:04:44.840182282 +0000 UTC m=+0.137372405 container init ec47e452f35c05bab18eebe5c4375d11622c4f3f1695417f5da73f98a1a4df39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_mclean, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public) Oct 14 06:04:44 localhost podman[310877]: 2025-10-14 10:04:44.748898425 +0000 UTC m=+0.046088578 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:44 localhost podman[310877]: 2025-10-14 10:04:44.85017985 +0000 UTC m=+0.147369983 container start ec47e452f35c05bab18eebe5c4375d11622c4f3f1695417f5da73f98a1a4df39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_mclean, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, 
description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux , RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, name=rhceph, vendor=Red Hat, Inc.) Oct 14 06:04:44 localhost podman[310877]: 2025-10-14 10:04:44.851870765 +0000 UTC m=+0.149060898 container attach ec47e452f35c05bab18eebe5c4375d11622c4f3f1695417f5da73f98a1a4df39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_mclean, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., version=7, vcs-type=git, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 14 06:04:44 localhost loving_mclean[310892]: 167 167 Oct 14 06:04:44 localhost systemd[1]: libpod-ec47e452f35c05bab18eebe5c4375d11622c4f3f1695417f5da73f98a1a4df39.scope: Deactivated successfully. Oct 14 06:04:44 localhost podman[310877]: 2025-10-14 10:04:44.856194451 +0000 UTC m=+0.153384614 container died ec47e452f35c05bab18eebe5c4375d11622c4f3f1695417f5da73f98a1a4df39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_mclean, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, name=rhceph, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 14 06:04:44 localhost ceph-mon[307093]: Reconfiguring crash.np0005486730 (monmap changed)... 
Oct 14 06:04:44 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain Oct 14 06:04:44 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:44 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:44 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:04:44 localhost podman[310898]: 2025-10-14 10:04:44.959908632 +0000 UTC m=+0.090602530 container remove ec47e452f35c05bab18eebe5c4375d11622c4f3f1695417f5da73f98a1a4df39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_mclean, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 14 06:04:44 localhost systemd[1]: libpod-conmon-ec47e452f35c05bab18eebe5c4375d11622c4f3f1695417f5da73f98a1a4df39.scope: Deactivated successfully. Oct 14 06:04:45 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Oct 14 06:04:45 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... Oct 14 06:04:45 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:04:45 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:04:45 localhost systemd[1]: tmp-crun.bMPCKR.mount: Deactivated successfully. Oct 14 06:04:45 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:04:45 localhost podman[310966]: Oct 14 06:04:45 localhost podman[310966]: 2025-10-14 10:04:45.655538004 +0000 UTC m=+0.059049404 container create d316cd40f530d940154254c5232ef471f23e39c10c8c3cb23cc3464e7cd0b05b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williamson, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Oct 14 06:04:45 localhost systemd[1]: Started libpod-conmon-d316cd40f530d940154254c5232ef471f23e39c10c8c3cb23cc3464e7cd0b05b.scope. Oct 14 06:04:45 localhost systemd[1]: Started libcrun container. Oct 14 06:04:45 localhost podman[310966]: 2025-10-14 10:04:45.711601827 +0000 UTC m=+0.115113237 container init d316cd40f530d940154254c5232ef471f23e39c10c8c3cb23cc3464e7cd0b05b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williamson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., release=553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:04:45 localhost podman[310966]: 2025-10-14 10:04:45.725064538 +0000 UTC m=+0.128575978 container start d316cd40f530d940154254c5232ef471f23e39c10c8c3cb23cc3464e7cd0b05b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williamson, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, 
io.openshift.tags=rhceph ceph, release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:04:45 localhost podman[310966]: 2025-10-14 10:04:45.725377227 +0000 UTC m=+0.128888707 container attach d316cd40f530d940154254c5232ef471f23e39c10c8c3cb23cc3464e7cd0b05b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williamson, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, name=rhceph, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, architecture=x86_64, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:04:45 localhost angry_williamson[310981]: 167 167 Oct 14 06:04:45 localhost systemd[1]: libpod-d316cd40f530d940154254c5232ef471f23e39c10c8c3cb23cc3464e7cd0b05b.scope: Deactivated successfully. Oct 14 06:04:45 localhost podman[310966]: 2025-10-14 10:04:45.7307105 +0000 UTC m=+0.134221970 container died d316cd40f530d940154254c5232ef471f23e39c10c8c3cb23cc3464e7cd0b05b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williamson, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, name=rhceph, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True) Oct 14 06:04:45 localhost podman[310966]: 2025-10-14 10:04:45.639610698 +0000 UTC m=+0.043122108 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 587 
MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:45 localhost podman[310987]: 2025-10-14 10:04:45.827426684 +0000 UTC m=+0.088127395 container remove d316cd40f530d940154254c5232ef471f23e39c10c8c3cb23cc3464e7cd0b05b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williamson, GIT_BRANCH=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, architecture=x86_64) Oct 14 06:04:45 localhost systemd[1]: libpod-conmon-d316cd40f530d940154254c5232ef471f23e39c10c8c3cb23cc3464e7cd0b05b.scope: Deactivated successfully. Oct 14 06:04:45 localhost ceph-mon[307093]: Reconfiguring crash.np0005486731 (monmap changed)... 
Oct 14 06:04:45 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain Oct 14 06:04:45 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:45 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:45 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:04:45 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:46 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Oct 14 06:04:46 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... Oct 14 06:04:46 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:04:46 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:04:46 localhost systemd[1]: tmp-crun.WwuMJt.mount: Deactivated successfully. Oct 14 06:04:46 localhost systemd[1]: var-lib-containers-storage-overlay-8954f23181b271c08d0ed413cf4386493db985eee876939603dd3069708b2020-merged.mount: Deactivated successfully. 
Oct 14 06:04:46 localhost podman[311065]: Oct 14 06:04:46 localhost podman[311065]: 2025-10-14 10:04:46.66458093 +0000 UTC m=+0.077009106 container create 0aa74143ba1f1258cdbd9769c552fbf7b39853778c9e7c0bd1c91d90c6263a69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_matsumoto, GIT_BRANCH=main, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, release=553, io.openshift.expose-services=, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, ceph=True) Oct 14 06:04:46 localhost systemd[1]: Started libpod-conmon-0aa74143ba1f1258cdbd9769c552fbf7b39853778c9e7c0bd1c91d90c6263a69.scope. Oct 14 06:04:46 localhost systemd[1]: Started libcrun container. 
Oct 14 06:04:46 localhost podman[311065]: 2025-10-14 10:04:46.731127954 +0000 UTC m=+0.143556130 container init 0aa74143ba1f1258cdbd9769c552fbf7b39853778c9e7c0bd1c91d90c6263a69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_matsumoto, release=553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, build-date=2025-09-24T08:57:55) Oct 14 06:04:46 localhost podman[311065]: 2025-10-14 10:04:46.633693562 +0000 UTC m=+0.046121758 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:46 localhost podman[311065]: 2025-10-14 10:04:46.739995462 +0000 UTC m=+0.152423628 container start 0aa74143ba1f1258cdbd9769c552fbf7b39853778c9e7c0bd1c91d90c6263a69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_matsumoto, name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git) Oct 14 06:04:46 localhost podman[311065]: 2025-10-14 10:04:46.740226588 +0000 UTC m=+0.152654804 container attach 0aa74143ba1f1258cdbd9769c552fbf7b39853778c9e7c0bd1c91d90c6263a69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_matsumoto, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , version=7, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_CLEAN=True, 
description=Red Hat Ceph Storage 7) Oct 14 06:04:46 localhost intelligent_matsumoto[311080]: 167 167 Oct 14 06:04:46 localhost systemd[1]: libpod-0aa74143ba1f1258cdbd9769c552fbf7b39853778c9e7c0bd1c91d90c6263a69.scope: Deactivated successfully. Oct 14 06:04:46 localhost podman[311065]: 2025-10-14 10:04:46.742053097 +0000 UTC m=+0.154481293 container died 0aa74143ba1f1258cdbd9769c552fbf7b39853778c9e7c0bd1c91d90c6263a69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_matsumoto, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, release=553, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:04:46 localhost podman[311085]: 2025-10-14 10:04:46.83388687 +0000 UTC m=+0.077631703 container remove 0aa74143ba1f1258cdbd9769c552fbf7b39853778c9e7c0bd1c91d90c6263a69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_matsumoto, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, com.redhat.component=rhceph-container) Oct 14 06:04:46 localhost systemd[1]: libpod-conmon-0aa74143ba1f1258cdbd9769c552fbf7b39853778c9e7c0bd1c91d90c6263a69.scope: Deactivated successfully. Oct 14 06:04:46 localhost ceph-mon[307093]: Reconfiguring osd.2 (monmap changed)... Oct 14 06:04:46 localhost ceph-mon[307093]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:04:46 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:46 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:46 localhost ceph-mon[307093]: Reconfiguring osd.4 (monmap changed)... Oct 14 06:04:46 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:04:46 localhost ceph-mon[307093]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:04:47 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... 
Oct 14 06:04:47 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... Oct 14 06:04:47 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:04:47 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:04:47 localhost systemd[1]: var-lib-containers-storage-overlay-87a4d37dba064e6edba65d96ef037ccd4f84d533b1423e811600c24e3dc21e22-merged.mount: Deactivated successfully. Oct 14 06:04:47 localhost podman[311161]: Oct 14 06:04:47 localhost podman[311161]: 2025-10-14 10:04:47.73232111 +0000 UTC m=+0.074191821 container create 70a2a6ac9347f7f1622df5b354f846c7b61780a2123e9190ab93c1d6645b891c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_cori, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True) Oct 14 06:04:47 localhost systemd[1]: Started 
libpod-conmon-70a2a6ac9347f7f1622df5b354f846c7b61780a2123e9190ab93c1d6645b891c.scope. Oct 14 06:04:47 localhost systemd[1]: Started libcrun container. Oct 14 06:04:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:47 localhost podman[311161]: 2025-10-14 10:04:47.792670138 +0000 UTC m=+0.134540849 container init 70a2a6ac9347f7f1622df5b354f846c7b61780a2123e9190ab93c1d6645b891c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_cori, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, ceph=True, vcs-type=git, architecture=x86_64) Oct 14 06:04:47 localhost podman[311161]: 2025-10-14 10:04:47.701704769 +0000 UTC m=+0.043575510 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:47 localhost podman[311161]: 2025-10-14 10:04:47.801387122 +0000 UTC m=+0.143257843 container start 70a2a6ac9347f7f1622df5b354f846c7b61780a2123e9190ab93c1d6645b891c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_cori, 
ceph=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph) Oct 14 06:04:47 localhost podman[311161]: 2025-10-14 10:04:47.801810153 +0000 UTC m=+0.143680864 container attach 70a2a6ac9347f7f1622df5b354f846c7b61780a2123e9190ab93c1d6645b891c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_cori, io.buildah.version=1.33.12, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.) Oct 14 06:04:47 localhost wizardly_cori[311177]: 167 167 Oct 14 06:04:47 localhost systemd[1]: libpod-70a2a6ac9347f7f1622df5b354f846c7b61780a2123e9190ab93c1d6645b891c.scope: Deactivated successfully. Oct 14 06:04:47 localhost podman[311161]: 2025-10-14 10:04:47.804793373 +0000 UTC m=+0.146664084 container died 70a2a6ac9347f7f1622df5b354f846c7b61780a2123e9190ab93c1d6645b891c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_cori, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vcs-type=git, release=553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:04:47 localhost podman[311182]: 2025-10-14 10:04:47.895156146 +0000 UTC m=+0.080174111 container remove 70a2a6ac9347f7f1622df5b354f846c7b61780a2123e9190ab93c1d6645b891c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=wizardly_cori, vcs-type=git, GIT_BRANCH=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 06:04:47 localhost systemd[1]: libpod-conmon-70a2a6ac9347f7f1622df5b354f846c7b61780a2123e9190ab93c1d6645b891c.scope: Deactivated successfully. Oct 14 06:04:48 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486731.swasqz (monmap changed)... Oct 14 06:04:48 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486731.swasqz (monmap changed)... 
Oct 14 06:04:48 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:04:48 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:04:48 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:48 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:48 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... Oct 14 06:04:48 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:04:48 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:04:48 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:48 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:48 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:04:48 localhost systemd[1]: var-lib-containers-storage-overlay-ceb2af0d57d7721c6372f3649f98229f1c83740f4882e7bc293cacb37c7965a3-merged.mount: Deactivated successfully. 
Oct 14 06:04:48 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34354 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005486732.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch Oct 14 06:04:48 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:04:48 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:04:48 localhost podman[311250]: Oct 14 06:04:48 localhost podman[311250]: 2025-10-14 10:04:48.622808236 +0000 UTC m=+0.073712907 container create 92f16987661831ee948cc44ec38b2d358e336a32cb984d06163b8864d9712bfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_almeida, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True) Oct 14 06:04:48 localhost systemd[1]: Started libpod-conmon-92f16987661831ee948cc44ec38b2d358e336a32cb984d06163b8864d9712bfa.scope. 
Oct 14 06:04:48 localhost systemd[1]: Started libcrun container. Oct 14 06:04:48 localhost podman[311250]: 2025-10-14 10:04:48.683208576 +0000 UTC m=+0.134113247 container init 92f16987661831ee948cc44ec38b2d358e336a32cb984d06163b8864d9712bfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_almeida, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, version=7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:04:48 localhost podman[311250]: 2025-10-14 10:04:48.691449577 +0000 UTC m=+0.142354248 container start 92f16987661831ee948cc44ec38b2d358e336a32cb984d06163b8864d9712bfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_almeida, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.expose-services=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, distribution-scope=public, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=553, vendor=Red Hat, Inc., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:04:48 localhost podman[311250]: 2025-10-14 10:04:48.691672283 +0000 UTC m=+0.142576984 container attach 92f16987661831ee948cc44ec38b2d358e336a32cb984d06163b8864d9712bfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_almeida, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, release=553, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:04:48 localhost suspicious_almeida[311265]: 167 167 Oct 14 06:04:48 
localhost systemd[1]: libpod-92f16987661831ee948cc44ec38b2d358e336a32cb984d06163b8864d9712bfa.scope: Deactivated successfully. Oct 14 06:04:48 localhost podman[311250]: 2025-10-14 10:04:48.593807999 +0000 UTC m=+0.044712720 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:04:48 localhost podman[311250]: 2025-10-14 10:04:48.694991172 +0000 UTC m=+0.145895863 container died 92f16987661831ee948cc44ec38b2d358e336a32cb984d06163b8864d9712bfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_almeida, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, ceph=True, GIT_CLEAN=True, release=553, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.33.12, RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55) Oct 14 06:04:48 localhost podman[311270]: 2025-10-14 10:04:48.786761553 +0000 UTC m=+0.080342305 container remove 92f16987661831ee948cc44ec38b2d358e336a32cb984d06163b8864d9712bfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_almeida, GIT_BRANCH=main, ceph=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=553, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.buildah.version=1.33.12, version=7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:04:48 localhost systemd[1]: libpod-conmon-92f16987661831ee948cc44ec38b2d358e336a32cb984d06163b8864d9712bfa.scope: Deactivated successfully. Oct 14 06:04:48 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:04:48 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:04:48 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:04:48 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:04:49 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:04:49 localhost systemd[1]: var-lib-containers-storage-overlay-386df41e5f5d4029ba9229603c54ce2a26303f31cd8f6618b798f0c51d465139-merged.mount: Deactivated successfully. 
Oct 14 06:04:49 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486731.swasqz (monmap changed)... Oct 14 06:04:49 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:04:49 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:49 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:04:49 localhost ceph-mon[307093]: Deploying daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:04:49 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:49 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:49 localhost ceph-mon[307093]: Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:04:49 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:04:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 
2025-10-14 10:04:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.975 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:04:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:04:50 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:04:51 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:04:51 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:04:51 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Oct 14 06:04:51 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... Oct 14 06:04:51 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:04:51 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:04:51 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:04:51 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:04:51 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:04:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:52 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:52 
localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:52 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:52 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:52 localhost ceph-mon[307093]: Reconfiguring osd.1 (monmap changed)... Oct 14 06:04:52 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 14 06:04:52 localhost ceph-mon[307093]: Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:04:52 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:04:52 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:04:52 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Oct 14 06:04:52 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... Oct 14 06:04:52 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:04:52 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:04:53 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:53 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:53 localhost ceph-mon[307093]: Reconfiguring osd.5 (monmap changed)... 
Oct 14 06:04:53 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 14 06:04:53 localhost ceph-mon[307093]: Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:04:53 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:04:53 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:04:53 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:04:53 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... Oct 14 06:04:53 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... Oct 14 06:04:53 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:04:53 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:04:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:54 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:04:54 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:04:54 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... 
Oct 14 06:04:54 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... Oct 14 06:04:54 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:04:54 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:04:54 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:04:54 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:54 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:54 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... Oct 14 06:04:54 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:04:54 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:04:54 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. 
Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.613348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436295613391, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2171, "num_deletes": 251, "total_data_size": 3859602, "memory_usage": 3909016, "flush_reason": "Manual Compaction"} Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436295627926, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 2031772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12449, "largest_seqno": 14615, "table_properties": {"data_size": 2023019, "index_size": 5006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 24514, "raw_average_key_size": 23, "raw_value_size": 2003209, "raw_average_value_size": 1884, "num_data_blocks": 219, "num_entries": 1063, "num_filter_entries": 1063, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436245, "oldest_key_time": 1760436245, "file_creation_time": 1760436295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 14654 microseconds, and 5501 cpu microseconds. Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.628001) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 2031772 bytes OK Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.628028) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.632211) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.632233) EVENT_LOG_v1 {"time_micros": 1760436295632226, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.632255) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 3848750, prev total WAL file size 
3849355, number of live WAL files 2. Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.636148) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373535' seq:72057594037927935, type:22 .. '6D6772737461740034303036' seq:0, type:0; will stop at (end) Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1984KB)], [18(15MB)] Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436295636195, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 17856715, "oldest_snapshot_seqno": -1} Oct 14 06:04:55 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005486733 (monmap changed)... Oct 14 06:04:55 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005486733 (monmap changed)... 
Oct 14 06:04:55 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:04:55 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10898 keys, 15796361 bytes, temperature: kUnknown Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436295707840, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 15796361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15733732, "index_size": 34304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27269, "raw_key_size": 290876, "raw_average_key_size": 26, "raw_value_size": 15547215, "raw_average_value_size": 1426, "num_data_blocks": 1310, "num_entries": 10898, "num_filter_entries": 10898, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 21, 
"seqno_to_time_mapping": "N/A"}} Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.708208) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 15796361 bytes Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.709860) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 248.8 rd, 220.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 15.1 +0.0 blob) out(15.1 +0.0 blob), read-write-amplify(16.6) write-amplify(7.8) OK, records in: 11420, records dropped: 522 output_compression: NoCompression Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.709894) EVENT_LOG_v1 {"time_micros": 1760436295709877, "job": 8, "event": "compaction_finished", "compaction_time_micros": 71778, "compaction_time_cpu_micros": 47158, "output_level": 6, "num_output_files": 1, "total_output_size": 15796361, "num_input_records": 11420, "num_output_records": 10898, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436295710372, "job": 8, "event": "table_file_deletion", "file_number": 20} Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005486731/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436295712972, "job": 8, "event": "table_file_deletion", "file_number": 18} Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.636071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.713083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.713089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.713092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.713095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:55 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:04:55.713098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:04:55 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:04:55 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:04:55 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:04:55 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:55 localhost 
ceph-mon[307093]: Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... Oct 14 06:04:55 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:04:55 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:04:55 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:55 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:55 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:04:55 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:04:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:56 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Oct 14 06:04:56 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... 
Oct 14 06:04:56 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:04:56 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:04:56 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:04:56 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:04:56 localhost ceph-mon[307093]: Reconfiguring crash.np0005486733 (monmap changed)... Oct 14 06:04:56 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:04:56 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:56 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:56 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 14 06:04:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:04:57 localhost podman[311287]: 2025-10-14 10:04:57.547712835 +0000 UTC m=+0.085495194 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:04:57 localhost podman[311287]: 2025-10-14 10:04:57.562118481 +0000 UTC m=+0.099900870 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 06:04:57 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:04:57 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... 
Oct 14 06:04:57 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Oct 14 06:04:57 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:04:57 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:04:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:04:57.629 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:04:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:04:57.631 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:04:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:04:57.631 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:04:57 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:04:57 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:04:57 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:04:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail 
Oct 14 06:04:57 localhost ceph-mon[307093]: Reconfiguring osd.0 (monmap changed)... Oct 14 06:04:57 localhost ceph-mon[307093]: Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:04:57 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:57 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:57 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:04:58 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... Oct 14 06:04:58 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... Oct 14 06:04:58 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:04:58 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:04:58 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:04:58 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:04:58 localhost ceph-mon[307093]: Reconfiguring osd.3 (monmap changed)... 
Oct 14 06:04:58 localhost ceph-mon[307093]: Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:04:58 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:58 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:58 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:04:59 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:04:59 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005486733.primvu (monmap changed)... Oct 14 06:04:59 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005486733.primvu (monmap changed)... 
Oct 14 06:04:59 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:04:59 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:04:59 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:04:59 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:04:59 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:04:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:04:59 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:04:59 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... 
Oct 14 06:04:59 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:04:59 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:59 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:04:59 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:05:00 localhost podman[246584]: time="2025-10-14T10:05:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:05:00 localhost podman[246584]: @ - - [14/Oct/2025:10:05:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:05:00 localhost podman[246584]: @ - - [14/Oct/2025:10:05:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18343 "" "Go-http-client/1.1" Oct 14 06:05:00 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:05:00 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:05:00 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486733.primvu (monmap changed)... 
Oct 14 06:05:00 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:05:00 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:00 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:01 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect) Oct 14 06:05:01 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory Oct 14 06:05:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:05:01 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 14 06:05:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:05:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:05:02 localhost podman[311373]: 2025-10-14 10:05:02.555590414 +0000 UTC m=+0.091423033 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent) Oct 14 06:05:02 localhost podman[311373]: 2025-10-14 10:05:02.565130769 +0000 UTC 
m=+0.100963418 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:05:02 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:05:02 localhost systemd[1]: tmp-crun.Ubo7mN.mount: Deactivated successfully.
Oct 14 06:05:02 localhost podman[311374]: 2025-10-14 10:05:02.659267104 +0000 UTC m=+0.189448102 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 06:05:02 localhost podman[311374]: 2025-10-14 10:05:02.672237442 +0000 UTC m=+0.202418410 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 14 06:05:02 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:05:02 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:02 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory
Oct 14 06:05:02 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:05:02 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:05:03 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 0b071ba1-84ed-4035-a352-025398baf893 (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:05:03 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 0b071ba1-84ed-4035-a352-025398baf893 (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:05:03 localhost ceph-mgr[300442]: [progress INFO root] Completed event 0b071ba1-84ed-4035-a352-025398baf893 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Oct 14 06:05:03 localhost openstack_network_exporter[248748]: ERROR 10:05:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 06:05:03 localhost openstack_network_exporter[248748]: ERROR 10:05:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:05:03 localhost openstack_network_exporter[248748]: ERROR 10:05:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:05:03 localhost openstack_network_exporter[248748]: ERROR 10:05:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 06:05:03 localhost openstack_network_exporter[248748]:
Oct 14 06:05:03 localhost openstack_network_exporter[248748]: ERROR 10:05:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 06:05:03 localhost openstack_network_exporter[248748]:
Oct 14 06:05:03 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:03 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (2) No such file or directory
Oct 14 06:05:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:05:03 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 14 06:05:03 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 14 06:05:03 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:05:03 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:05:03 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (22) Invalid argument
Oct 14 06:05:03 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election
Oct 14 06:05:03 localhost ceph-mon[307093]: paxos.2).electionLogic(48) init, last seen epoch 48
Oct 14 06:05:03 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:05:03 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:05:03 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:05:04 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:04 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (22) Invalid argument
Oct 14 06:05:05 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events
Oct 14 06:05:05 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:05 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (22) Invalid argument
Oct 14 06:05:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:05:06 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:06 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (22) Invalid argument
Oct 14 06:05:07 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:07 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (22) Invalid argument
Oct 14 06:05:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:05:08 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e12 handle_auth_request failed to assign global_id
Oct 14 06:05:08 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e12 handle_auth_request failed to assign global_id
Oct 14 06:05:08 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:08 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (22) Invalid argument
Oct 14 06:05:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:05:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:05:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:05:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )]
Oct 14 06:05:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 14 06:05:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:05:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )]
Oct 14 06:05:08 localhost ceph-mgr[300442]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 14 06:05:08 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e12 handle_auth_request failed to assign global_id
Oct 14 06:05:08 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:05:09 localhost ceph-mon[307093]: mon.np0005486730 calling monitor election
Oct 14 06:05:09 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election
Oct 14 06:05:09 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election
Oct 14 06:05:09 localhost ceph-mon[307093]: mon.np0005486730 is new leader, mons np0005486730,np0005486733,np0005486731 in quorum (ranks 0,1,2)
Oct 14 06:05:09 localhost ceph-mon[307093]: Health check failed: 1/4 mons down, quorum np0005486730,np0005486733,np0005486731 (MON_DOWN)
Oct 14 06:05:09 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/4 mons down, quorum np0005486730,np0005486733,np0005486731
Oct 14 06:05:09 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 14 06:05:09 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm
Oct 14 06:05:09 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 14 06:05:09 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub']
Oct 14 06:05:09 localhost ceph-mon[307093]: [WRN] MON_DOWN: 1/4 mons down, quorum np0005486730,np0005486733,np0005486731
Oct 14 06:05:09 localhost ceph-mon[307093]: mon.np0005486732 (rank 3) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum)
Oct 14 06:05:09 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz'
Oct 14 06:05:09 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:05:09 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:09 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (22) Invalid argument
Oct 14 06:05:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:05:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 06:05:10 localhost podman[311432]: 2025-10-14 10:05:10.534442235 +0000 UTC m=+0.077417462 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Oct 14 06:05:10 localhost podman[311432]: 2025-10-14 10:05:10.571553283 +0000 UTC m=+0.114528520 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:05:10 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:05:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.34369 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:10 localhost ceph-mgr[300442]: [cephadm INFO root] Reconfig service osd.default_drive_group
Oct 14 06:05:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group
Oct 14 06:05:10 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486730.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:10 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:10 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:10 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:10 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:10 localhost ceph-mgr[300442]: mgr finish mon failed to return metadata for mon.np0005486732: (22) Invalid argument
Oct 14 06:05:10 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election
Oct 14 06:05:10 localhost ceph-mon[307093]: paxos.2).electionLogic(50) init, last seen epoch 50
Oct 14 06:05:10 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:05:10 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:05:10 localhost ceph-mon[307093]: mon.np0005486731@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:05:10 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:05:11 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:05:11 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:05:11 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:05:11 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:05:11 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:05:11 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:05:11 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:05:11 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:05:11 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mon.np0005486732 172.18.0.107:0/391083985; not ready for session (expect reconnect)
Oct 14 06:05:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Oct 14 06:05:11 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election
Oct 14 06:05:11 localhost ceph-mon[307093]: Reconfig service osd.default_drive_group
Oct 14 06:05:11 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:11 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:11 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:11 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:05:11 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election
Oct 14 06:05:11 localhost ceph-mon[307093]: mon.np0005486730 calling monitor election
Oct 14 06:05:11 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election
Oct 14 06:05:11 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election
Oct 14 06:05:11 localhost ceph-mon[307093]: mon.np0005486730 is new leader, mons np0005486730,np0005486733,np0005486731,np0005486732 in quorum (ranks 0,1,2,3)
Oct 14 06:05:11 localhost ceph-mon[307093]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005486730,np0005486733,np0005486731)
Oct 14 06:05:11 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 14 06:05:11 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 14 06:05:11 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm
Oct 14 06:05:11 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 14 06:05:11 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub']
Oct 14 06:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 06:05:11 localhost podman[311735]: 2025-10-14 10:05:11.94039869 +0000 UTC m=+0.079295461 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d)
Oct 14 06:05:11 localhost podman[311735]: 2025-10-14 10:05:11.977002135 +0000 UTC m=+0.115898896 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d)
Oct 14 06:05:11 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.114188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436312114232, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 894, "num_deletes": 251, "total_data_size": 2223091, "memory_usage": 2253472, "flush_reason": "Manual Compaction"}
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436312122873, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1351053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14621, "largest_seqno": 15509, "table_properties": {"data_size": 1346673, "index_size": 1979, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11833, "raw_average_key_size": 22, "raw_value_size": 1337084, "raw_average_value_size": 2494, "num_data_blocks": 84, "num_entries": 536, "num_filter_entries": 536, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436295, "oldest_key_time": 1760436295, "file_creation_time": 1760436312, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 8739 microseconds, and 4742 cpu microseconds.
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.122925) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1351053 bytes OK
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.122950) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.124692) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.124714) EVENT_LOG_v1 {"time_micros": 1760436312124707, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.124773) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2218178, prev total WAL file size 2218470, number of live WAL files 2.
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.128156) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end)
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1319KB)], [21(15MB)]
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436312128209, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 17147414, "oldest_snapshot_seqno": -1}
Oct 14 06:05:12 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 560a0912-56c1-40ed-bbaa-d21bd19d11ae (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:05:12 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 560a0912-56c1-40ed-bbaa-d21bd19d11ae (Updating node-proxy deployment (+4 -> 4))
Oct 14 06:05:12 localhost ceph-mgr[300442]: [progress INFO root] Completed event 560a0912-56c1-40ed-bbaa-d21bd19d11ae (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10897 keys, 13994401 bytes, temperature: kUnknown
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436312208320, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 13994401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13933575, "index_size": 32501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27269, "raw_key_size": 291880, "raw_average_key_size": 26, "raw_value_size": 13748816, "raw_average_value_size": 1261, "num_data_blocks": 1231, "num_entries": 10897, "num_filter_entries": 10897, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436312, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.208568) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 13994401 bytes Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.210139) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.9 rd, 174.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 15.1 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(23.1) write-amplify(10.4) OK, records in: 11434, records dropped: 537 output_compression: NoCompression Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.210170) EVENT_LOG_v1 {"time_micros": 1760436312210156, "job": 10, "event": "compaction_finished", "compaction_time_micros": 80181, "compaction_time_cpu_micros": 40632, "output_level": 6, "num_output_files": 1, "total_output_size": 13994401, "num_input_records": 11434, "num_output_records": 10897, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436312210474, "job": 10, "event": "table_file_deletion", "file_number": 23} Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436312212680, 
"job": 10, "event": "table_file_deletion", "file_number": 21} Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.128048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.212861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.212874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.212879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.212884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:12 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:12.212888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:12 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e82 e82: 6 total, 6 up, 6 in Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr handle_mgr_map I was active but no longer am Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn e: '/usr/bin/ceph-mgr' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 0: '/usr/bin/ceph-mgr' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 1: '-n' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 2: 'mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 3: '-f' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 4: '--setuser' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 5: 'ceph' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 6: '--setgroup' Oct 14 06:05:12 localhost 
ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:12.301+0000 7fb2b6ec8640 -1 mgr handle_mgr_map I was active but no longer am Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 7: 'ceph' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 8: '--default-log-to-file=false' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 9: '--default-log-to-journald=true' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn 10: '--default-log-to-stderr=false' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn respawning with exe /usr/bin/ceph-mgr Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr respawn exe_path /proc/self/exe Oct 14 06:05:12 localhost systemd[1]: session-70.scope: Deactivated successfully. Oct 14 06:05:12 localhost systemd[1]: session-70.scope: Consumed 21.905s CPU time. Oct 14 06:05:12 localhost systemd-logind[760]: Session 70 logged out. Waiting for processes to exit. Oct 14 06:05:12 localhost systemd-logind[760]: Removed session 70. 
Oct 14 06:05:12 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: ignoring --setuser ceph since I am not root Oct 14 06:05:12 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: ignoring --setgroup ceph since I am not root Oct 14 06:05:12 localhost ceph-mgr[300442]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Oct 14 06:05:12 localhost ceph-mgr[300442]: pidfile_write: ignore empty --pid-file Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr[py] Loading python module 'alerts' Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr[py] Loading python module 'balancer' Oct 14 06:05:12 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:12.475+0000 7ff66a216140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 14 06:05:12 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:12.540+0000 7ff66a216140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 14 06:05:12 localhost ceph-mgr[300442]: mgr[py] Loading python module 'cephadm' Oct 14 06:05:12 localhost sshd[311829]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:05:12 localhost systemd-logind[760]: New session 71 of user ceph-admin. Oct 14 06:05:12 localhost systemd[1]: Started Session 71 of User ceph-admin. 
Oct 14 06:05:12 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:12 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:12 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:12 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17397 172.18.0.106:0/541940411' entity='mgr.np0005486731.swasqz' Oct 14 06:05:12 localhost ceph-mon[307093]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 14 06:05:12 localhost ceph-mon[307093]: Activating manager daemon np0005486732.pasqzz Oct 14 06:05:12 localhost ceph-mon[307093]: from='client.? 
172.18.0.200:0/216892054' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 14 06:05:12 localhost ceph-mon[307093]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 14 06:05:12 localhost ceph-mon[307093]: Manager daemon np0005486732.pasqzz is now available Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486729.localdomain.devices.0"} : dispatch Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486729.localdomain.devices.0"} : dispatch Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486729.localdomain.devices.0"}]': finished Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486729.localdomain.devices.0"} : dispatch Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486729.localdomain.devices.0"} : dispatch Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486729.localdomain.devices.0"}]': finished Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486732.pasqzz/mirror_snapshot_schedule"} : dispatch Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486732.pasqzz/mirror_snapshot_schedule"} : 
dispatch Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486732.pasqzz/trash_purge_schedule"} : dispatch Oct 14 06:05:12 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486732.pasqzz/trash_purge_schedule"} : dispatch Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Loading python module 'crash' Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Module crash has missing NOTIFY_TYPES member Oct 14 06:05:13 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:13.216+0000 7ff66a216140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Loading python module 'dashboard' Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Loading python module 'devicehealth' Oct 14 06:05:13 localhost systemd[1]: tmp-crun.w2iojZ.mount: Deactivated successfully. 
Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Loading python module 'diskprediction_local' Oct 14 06:05:13 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:13.756+0000 7ff66a216140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 14 06:05:13 localhost podman[311943]: 2025-10-14 10:05:13.761558792 +0000 UTC m=+0.115880636 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, name=rhceph) Oct 14 06:05:13 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. 
This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Oct 14 06:05:13 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Oct 14 06:05:13 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: from numpy import show_config as show_numpy_config Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 14 06:05:13 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:13.884+0000 7ff66a216140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Loading python module 'influx' Oct 14 06:05:13 localhost ceph-mon[307093]: removing stray HostCache host record np0005486729.localdomain.devices.0 Oct 14 06:05:13 localhost podman[311943]: 2025-10-14 10:05:13.901420929 +0000 UTC m=+0.255742743 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, release=553, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., 
io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, com.redhat.component=rhceph-container) Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Module influx has missing NOTIFY_TYPES member Oct 14 06:05:13 localhost ceph-mgr[300442]: mgr[py] Loading python module 'insights' Oct 14 06:05:13 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:13.945+0000 7ff66a216140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Loading python module 'iostat' Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Loading python module 'k8sevents' Oct 14 06:05:14 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:14.063+0000 7ff66a216140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 14 06:05:14 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Loading python module 'localpool' Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Loading python module 'mds_autoscaler' Oct 14 06:05:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Loading python module 'mirroring' Oct 14 06:05:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:05:14 localhost podman[312048]: 2025-10-14 10:05:14.559129089 +0000 UTC m=+0.114167411 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=) Oct 14 06:05:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Loading python module 'nfs' Oct 14 06:05:14 localhost podman[312081]: 2025-10-14 10:05:14.651829782 +0000 UTC m=+0.089774797 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:05:14 localhost podman[312081]: 2025-10-14 10:05:14.662990097 +0000 UTC m=+0.100935142 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:05:14 localhost podman[312048]: 2025-10-14 10:05:14.67146633 +0000 UTC m=+0.226504572 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, maintainer=Red Hat, Inc.) Oct 14 06:05:14 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:05:14 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:05:14 localhost podman[312084]: 2025-10-14 10:05:14.729222973 +0000 UTC m=+0.154913126 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:05:14 localhost podman[312084]: 2025-10-14 10:05:14.762444158 +0000 UTC m=+0.188134281 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 06:05:14 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 14 06:05:14 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:14.780+0000 7ff66a216140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Loading python module 'orchestrator' Oct 14 06:05:14 localhost ceph-mon[307093]: [14/Oct/2025:10:05:13] ENGINE Bus STARTING Oct 14 06:05:14 localhost ceph-mon[307093]: [14/Oct/2025:10:05:14] ENGINE Serving on http://172.18.0.107:8765 Oct 14 06:05:14 localhost ceph-mon[307093]: [14/Oct/2025:10:05:14] ENGINE Serving on https://172.18.0.107:7150 Oct 14 06:05:14 localhost ceph-mon[307093]: [14/Oct/2025:10:05:14] ENGINE Bus STARTED Oct 14 06:05:14 localhost ceph-mon[307093]: [14/Oct/2025:10:05:14] ENGINE Client ('172.18.0.107', 44626) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 14 06:05:14 localhost ceph-mon[307093]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Oct 14 06:05:14 localhost ceph-mon[307093]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Oct 14 06:05:14 localhost ceph-mon[307093]: Cluster is now healthy Oct 14 06:05:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:14 localhost ceph-mon[307093]: from='mgr.17415 ' 
entity='mgr.np0005486732.pasqzz' Oct 14 06:05:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Loading python module 'osd_perf_query' Oct 14 06:05:14 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:14.922+0000 7ff66a216140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 14 06:05:14 localhost ceph-mgr[300442]: mgr[py] Loading python module 'osd_support' Oct 14 06:05:14 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:14.985+0000 7ff66a216140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Loading python module 'pg_autoscaler' Oct 14 06:05:15 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:15.039+0000 7ff66a216140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Loading python module 'progress' Oct 14 06:05:15 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:15.103+0000 7ff66a216140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Module progress has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Loading python module 'prometheus' Oct 14 06:05:15 localhost 
ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:15.160+0000 7ff66a216140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Loading python module 'rbd_support' Oct 14 06:05:15 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:15.449+0000 7ff66a216140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:15.528+0000 7ff66a216140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Loading python module 'restful' Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Loading python module 'rgw' Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost ceph-mgr[300442]: mgr[py] Loading python module 'rook' Oct 14 06:05:15 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:15.856+0000 7ff66a216140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 14 06:05:15 localhost nova_compute[295778]: 2025-10-14 10:05:15.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:05:15 localhost nova_compute[295778]: 2025-10-14 10:05:15.963 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:05:15 localhost nova_compute[295778]: 2025-10-14 10:05:15.964 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:05:15 localhost nova_compute[295778]: 2025-10-14 10:05:15.966 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:05:15 localhost nova_compute[295778]: 2025-10-14 10:05:15.966 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:05:15 localhost nova_compute[295778]: 2025-10-14 10:05:15.974 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Module rook has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Loading python module 'selftest' Oct 14 06:05:16 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:16.273+0000 7ff66a216140 -1 mgr[py] Module rook has missing 
NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Module selftest has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Loading python module 'snap_schedule' Oct 14 06:05:16 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:16.333+0000 7ff66a216140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Loading python module 'stats' Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Loading python module 'status' Oct 14 06:05:16 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e12 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:05:16 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2881224626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:05:16 localhost nova_compute[295778]: 2025-10-14 10:05:16.490 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Module status has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Loading python module 'telegraf' Oct 14 06:05:16 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:16.529+0000 7ff66a216140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Loading python module 'telemetry' Oct 14 06:05:16 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:16.602+0000 7ff66a216140 -1 mgr[py] Module telegraf 
has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost nova_compute[295778]: 2025-10-14 10:05:16.688 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:05:16 localhost nova_compute[295778]: 2025-10-14 10:05:16.689 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12347MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": 
"type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:05:16 localhost nova_compute[295778]: 2025-10-14 10:05:16.690 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:05:16 localhost nova_compute[295778]: 2025-10-14 10:05:16.690 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Loading python module 'test_orchestrator' Oct 14 06:05:16 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:16.747+0000 7ff66a216140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost nova_compute[295778]: 2025-10-14 10:05:16.813 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 
_report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:05:16 localhost nova_compute[295778]: 2025-10-14 10:05:16.814 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:05:16 localhost nova_compute[295778]: 2025-10-14 10:05:16.842 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:16.896+0000 7ff66a216140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Oct 14 06:05:16 localhost ceph-mgr[300442]: mgr[py] Loading python module 'volumes' Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config 
rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486732.localdomain to 836.6M Oct 14 06:05:16 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd/host:np0005486730", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd/host:np0005486730", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' 
entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486733.localdomain to 836.6M Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486731.localdomain to 836.6M Oct 14 06:05:16 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:05:16 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:05:16 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:05:16 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/etc/ceph/ceph.conf Oct 14 06:05:16 localhost ceph-mon[307093]: Updating 
np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:05:17 localhost ceph-mgr[300442]: mgr[py] Module volumes has missing NOTIFY_TYPES member Oct 14 06:05:17 localhost ceph-mgr[300442]: mgr[py] Loading python module 'zabbix' Oct 14 06:05:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:17.086+0000 7ff66a216140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Oct 14 06:05:17 localhost ceph-mgr[300442]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Oct 14 06:05:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:05:17.143+0000 7ff66a216140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Oct 14 06:05:17 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x55aa8513f1e0 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0 Oct 14 06:05:17 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.107:6810/1583953055 Oct 14 06:05:17 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e12 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:05:17 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2562003662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:05:17 localhost nova_compute[295778]: 2025-10-14 10:05:17.315 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:05:17 localhost nova_compute[295778]: 2025-10-14 10:05:17.322 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:05:17 localhost nova_compute[295778]: 2025-10-14 10:05:17.339 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:05:17 localhost nova_compute[295778]: 2025-10-14 10:05:17.342 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:05:17 localhost nova_compute[295778]: 2025-10-14 10:05:17.342 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:05:17 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:05:17 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:05:17 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:17 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:17 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:17 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:19 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:05:19 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:05:19 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:05:19 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:05:19 localhost ceph-mon[307093]: Updating np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:05:19 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:19 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:19 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:19 localhost ceph-mon[307093]: 
from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:19 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:19 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:19 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:19 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:19 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:19 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:05:19 localhost nova_compute[295778]: 2025-10-14 10:05:19.343 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:05:19 localhost nova_compute[295778]: 2025-10-14 10:05:19.344 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:05:19 localhost nova_compute[295778]: 2025-10-14 10:05:19.344 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:05:19 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e12 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:05:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3573965965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:05:19 localhost nova_compute[295778]: 2025-10-14 10:05:19.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:05:19 localhost nova_compute[295778]: 2025-10-14 10:05:19.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:05:20 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:05:20 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:05:20 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:05:20 localhost ceph-mon[307093]: Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Oct 14 06:05:20 localhost ceph-mon[307093]: Health check failed: 2 stray host(s) with 2 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Oct 14 06:05:20 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:05:20 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486730.ddfidc", "caps": ["mon", "profile mgr", "osd", "allow *", 
"mds", "allow *"]} : dispatch Oct 14 06:05:20 localhost nova_compute[295778]: 2025-10-14 10:05:20.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:05:20 localhost nova_compute[295778]: 2025-10-14 10:05:20.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:05:20 localhost nova_compute[295778]: 2025-10-14 10:05:20.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:05:20 localhost nova_compute[295778]: 2025-10-14 10:05:20.918 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:05:20 localhost nova_compute[295778]: 2025-10-14 10:05:20.918 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:05:20 localhost nova_compute[295778]: 2025-10-14 10:05:20.919 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:05:20 localhost nova_compute[295778]: 2025-10-14 10:05:20.919 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:05:21 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486730.ddfidc (monmap changed)... 
Oct 14 06:05:21 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486730.ddfidc on np0005486730.localdomain Oct 14 06:05:21 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:21 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:21 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:21 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:21 localhost podman[313010]: Oct 14 06:05:21 localhost podman[313010]: 2025-10-14 10:05:21.866517416 +0000 UTC m=+0.076529379 container create a561411e2baf170a0775c6f1d507ead79f002e3e7f2f464841477f9e3787e10c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_newton, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, 
CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:05:21 localhost systemd[1]: Started libpod-conmon-a561411e2baf170a0775c6f1d507ead79f002e3e7f2f464841477f9e3787e10c.scope. Oct 14 06:05:21 localhost systemd[1]: Started libcrun container. Oct 14 06:05:21 localhost podman[313010]: 2025-10-14 10:05:21.835006485 +0000 UTC m=+0.045018518 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:05:21 localhost podman[313010]: 2025-10-14 10:05:21.936506321 +0000 UTC m=+0.146518294 container init a561411e2baf170a0775c6f1d507ead79f002e3e7f2f464841477f9e3787e10c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_newton, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container) Oct 14 06:05:21 localhost podman[313010]: 2025-10-14 10:05:21.943843974 +0000 UTC m=+0.153855937 container start a561411e2baf170a0775c6f1d507ead79f002e3e7f2f464841477f9e3787e10c 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_newton, vendor=Red Hat, Inc., vcs-type=git, version=7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, distribution-scope=public, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True) Oct 14 06:05:21 localhost podman[313010]: 2025-10-14 10:05:21.944217014 +0000 UTC m=+0.154229037 container attach a561411e2baf170a0775c6f1d507ead79f002e3e7f2f464841477f9e3787e10c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_newton, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, release=553, build-date=2025-09-24T08:57:55, ceph=True, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph) Oct 14 06:05:21 localhost musing_newton[313023]: 167 167 Oct 14 06:05:21 localhost systemd[1]: libpod-a561411e2baf170a0775c6f1d507ead79f002e3e7f2f464841477f9e3787e10c.scope: Deactivated successfully. Oct 14 06:05:21 localhost podman[313010]: 2025-10-14 10:05:21.949982796 +0000 UTC m=+0.159994779 container died a561411e2baf170a0775c6f1d507ead79f002e3e7f2f464841477f9e3787e10c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_newton, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , release=553, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:05:22 localhost podman[313028]: 2025-10-14 10:05:22.037503653 +0000 UTC m=+0.074926225 container remove 
a561411e2baf170a0775c6f1d507ead79f002e3e7f2f464841477f9e3787e10c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_newton, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhceph, release=553, distribution-scope=public, version=7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:05:22 localhost systemd[1]: libpod-conmon-a561411e2baf170a0775c6f1d507ead79f002e3e7f2f464841477f9e3787e10c.scope: Deactivated successfully. Oct 14 06:05:22 localhost ceph-mon[307093]: Reconfiguring crash.np0005486730 (monmap changed)... Oct 14 06:05:22 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain Oct 14 06:05:22 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:22 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:22 localhost ceph-mon[307093]: Reconfiguring crash.np0005486731 (monmap changed)... 
Oct 14 06:05:22 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:22 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:22 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain Oct 14 06:05:22 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:22 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:22 localhost ceph-mon[307093]: Reconfiguring osd.2 (monmap changed)... Oct 14 06:05:22 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:05:22 localhost ceph-mon[307093]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:05:22 localhost podman[313098]: Oct 14 06:05:22 localhost podman[313098]: 2025-10-14 10:05:22.700067051 +0000 UTC m=+0.073797506 container create b19fe849e7adac41c9b43e9b832a9389e94bb55c8657be819a8dd5aec35c0472 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_lederberg, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, 
GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, build-date=2025-09-24T08:57:55, RELEASE=main, distribution-scope=public, version=7, name=rhceph, vcs-type=git, release=553) Oct 14 06:05:22 localhost systemd[1]: Started libpod-conmon-b19fe849e7adac41c9b43e9b832a9389e94bb55c8657be819a8dd5aec35c0472.scope. Oct 14 06:05:22 localhost systemd[1]: Started libcrun container. Oct 14 06:05:22 localhost podman[313098]: 2025-10-14 10:05:22.768763662 +0000 UTC m=+0.142494107 container init b19fe849e7adac41c9b43e9b832a9389e94bb55c8657be819a8dd5aec35c0472 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_lederberg, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7) Oct 
14 06:05:22 localhost podman[313098]: 2025-10-14 10:05:22.670717427 +0000 UTC m=+0.044447912 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:05:22 localhost podman[313098]: 2025-10-14 10:05:22.777096681 +0000 UTC m=+0.150827126 container start b19fe849e7adac41c9b43e9b832a9389e94bb55c8657be819a8dd5aec35c0472 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_lederberg, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 06:05:22 localhost podman[313098]: 2025-10-14 10:05:22.777534444 +0000 UTC m=+0.151264959 container attach b19fe849e7adac41c9b43e9b832a9389e94bb55c8657be819a8dd5aec35c0472 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_lederberg, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, name=rhceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7) Oct 14 06:05:22 localhost infallible_lederberg[313114]: 167 167 Oct 14 06:05:22 localhost systemd[1]: libpod-b19fe849e7adac41c9b43e9b832a9389e94bb55c8657be819a8dd5aec35c0472.scope: Deactivated successfully. Oct 14 06:05:22 localhost podman[313098]: 2025-10-14 10:05:22.781852547 +0000 UTC m=+0.155582992 container died b19fe849e7adac41c9b43e9b832a9389e94bb55c8657be819a8dd5aec35c0472 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_lederberg, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., release=553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, build-date=2025-09-24T08:57:55, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=) Oct 14 06:05:22 localhost systemd[1]: var-lib-containers-storage-overlay-ba6e2f8781435b9abb3171e98ef0fa56ab20745def971220065372c637dffbf5-merged.mount: Deactivated successfully. Oct 14 06:05:22 localhost podman[313119]: 2025-10-14 10:05:22.872480336 +0000 UTC m=+0.078096409 container remove b19fe849e7adac41c9b43e9b832a9389e94bb55c8657be819a8dd5aec35c0472 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_lederberg, vcs-type=git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:05:22 localhost systemd[1]: libpod-conmon-b19fe849e7adac41c9b43e9b832a9389e94bb55c8657be819a8dd5aec35c0472.scope: Deactivated successfully. 
Oct 14 06:05:23 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:23 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:23 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:23 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:23 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:23 localhost ceph-mon[307093]: Reconfiguring osd.4 (monmap changed)... Oct 14 06:05:23 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:05:23 localhost ceph-mon[307093]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:05:23 localhost podman[313196]: Oct 14 06:05:23 localhost podman[313196]: 2025-10-14 10:05:23.71658902 +0000 UTC m=+0.074343911 container create 5e786483b1d4336ff34a9827a7b59495ff90479a7a7753ade323d7701884aa77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_chaplygin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, 
io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=553, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container) Oct 14 06:05:23 localhost systemd[1]: Started libpod-conmon-5e786483b1d4336ff34a9827a7b59495ff90479a7a7753ade323d7701884aa77.scope. Oct 14 06:05:23 localhost systemd[1]: Started libcrun container. Oct 14 06:05:23 localhost podman[313196]: 2025-10-14 10:05:23.785183188 +0000 UTC m=+0.142938099 container init 5e786483b1d4336ff34a9827a7b59495ff90479a7a7753ade323d7701884aa77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_chaplygin, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, version=7, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container) Oct 14 06:05:23 localhost podman[313196]: 2025-10-14 10:05:23.687440341 +0000 UTC m=+0.045195252 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:05:23 localhost podman[313196]: 2025-10-14 10:05:23.802448513 +0000 UTC m=+0.160203404 container start 5e786483b1d4336ff34a9827a7b59495ff90479a7a7753ade323d7701884aa77 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_chaplygin, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container) Oct 14 06:05:23 localhost podman[313196]: 2025-10-14 10:05:23.802843444 +0000 UTC m=+0.160598375 container attach 5e786483b1d4336ff34a9827a7b59495ff90479a7a7753ade323d7701884aa77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_chaplygin, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, maintainer=Guillaume Abrioux , distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=553, version=7, name=rhceph, CEPH_POINT_RELEASE=) Oct 14 06:05:23 localhost intelligent_chaplygin[313211]: 167 167 Oct 14 06:05:23 localhost systemd[1]: libpod-5e786483b1d4336ff34a9827a7b59495ff90479a7a7753ade323d7701884aa77.scope: Deactivated successfully. Oct 14 06:05:23 localhost podman[313196]: 2025-10-14 10:05:23.806791198 +0000 UTC m=+0.164546119 container died 5e786483b1d4336ff34a9827a7b59495ff90479a7a7753ade323d7701884aa77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_chaplygin, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-type=git, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , version=7, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Oct 14 06:05:23 localhost systemd[1]: tmp-crun.v2WIdK.mount: Deactivated successfully. 
Oct 14 06:05:23 localhost systemd[1]: var-lib-containers-storage-overlay-1681e58e9dc46b55e27035a38006998c6aa700b0fa6247d658311699ced3f832-merged.mount: Deactivated successfully. Oct 14 06:05:23 localhost podman[313216]: 2025-10-14 10:05:23.902889401 +0000 UTC m=+0.085037943 container remove 5e786483b1d4336ff34a9827a7b59495ff90479a7a7753ade323d7701884aa77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_chaplygin, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, RELEASE=main) Oct 14 06:05:23 localhost systemd[1]: libpod-conmon-5e786483b1d4336ff34a9827a7b59495ff90479a7a7753ade323d7701884aa77.scope: Deactivated successfully. 
Oct 14 06:05:24 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:05:24 localhost podman[313292]: Oct 14 06:05:24 localhost podman[313292]: 2025-10-14 10:05:24.764844915 +0000 UTC m=+0.077870184 container create 0a8aa1b228e1fba3afb73e2077e1da712c636a78677683690d111841d0f5908e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_khorana, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, architecture=x86_64, release=553, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7) Oct 14 06:05:24 localhost systemd[1]: Started libpod-conmon-0a8aa1b228e1fba3afb73e2077e1da712c636a78677683690d111841d0f5908e.scope. Oct 14 06:05:24 localhost systemd[1]: Started libcrun container. 
Oct 14 06:05:24 localhost podman[313292]: 2025-10-14 10:05:24.823612755 +0000 UTC m=+0.136638054 container init 0a8aa1b228e1fba3afb73e2077e1da712c636a78677683690d111841d0f5908e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_khorana, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_CLEAN=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux ) Oct 14 06:05:24 localhost podman[313292]: 2025-10-14 10:05:24.733269393 +0000 UTC m=+0.046294712 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:05:24 localhost podman[313292]: 2025-10-14 10:05:24.833044783 +0000 UTC m=+0.146070052 container start 0a8aa1b228e1fba3afb73e2077e1da712c636a78677683690d111841d0f5908e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_khorana, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=553, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:05:24 localhost podman[313292]: 2025-10-14 10:05:24.833539166 +0000 UTC m=+0.146564495 container attach 0a8aa1b228e1fba3afb73e2077e1da712c636a78677683690d111841d0f5908e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_khorana, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=553, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, 
GIT_CLEAN=True) Oct 14 06:05:24 localhost epic_khorana[313307]: 167 167 Oct 14 06:05:24 localhost systemd[1]: libpod-0a8aa1b228e1fba3afb73e2077e1da712c636a78677683690d111841d0f5908e.scope: Deactivated successfully. Oct 14 06:05:24 localhost podman[313292]: 2025-10-14 10:05:24.836646958 +0000 UTC m=+0.149672267 container died 0a8aa1b228e1fba3afb73e2077e1da712c636a78677683690d111841d0f5908e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_khorana, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, version=7, ceph=True, RELEASE=main, GIT_CLEAN=True, release=553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:05:24 localhost systemd[1]: var-lib-containers-storage-overlay-8fbfb5b675e031006c9612db0119c060b47c753e8da3420a4f2c50ea0ec650fa-merged.mount: Deactivated successfully. 
Oct 14 06:05:24 localhost podman[313312]: 2025-10-14 10:05:24.936141621 +0000 UTC m=+0.090276041 container remove 0a8aa1b228e1fba3afb73e2077e1da712c636a78677683690d111841d0f5908e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_khorana, build-date=2025-09-24T08:57:55, version=7, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.33.12, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:05:24 localhost systemd[1]: libpod-conmon-0a8aa1b228e1fba3afb73e2077e1da712c636a78677683690d111841d0f5908e.scope: Deactivated successfully. Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:25 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... 
Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:05:25 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:05:25 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:05:25 localhost podman[313381]: Oct 14 06:05:25 localhost podman[313381]: 2025-10-14 10:05:25.647378982 +0000 UTC m=+0.076814526 container create ed406ec57f13639502fb8dffb457971a46443ec2ceca142a3b2c032a5faeecce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_keller, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, name=rhceph) Oct 14 06:05:25 localhost systemd[1]: Started libpod-conmon-ed406ec57f13639502fb8dffb457971a46443ec2ceca142a3b2c032a5faeecce.scope. Oct 14 06:05:25 localhost systemd[1]: Started libcrun container. Oct 14 06:05:25 localhost podman[313381]: 2025-10-14 10:05:25.700283596 +0000 UTC m=+0.129719160 container init ed406ec57f13639502fb8dffb457971a46443ec2ceca142a3b2c032a5faeecce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_keller, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=553, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=) Oct 14 06:05:25 localhost podman[313381]: 2025-10-14 10:05:25.708675608 +0000 UTC m=+0.138111182 container start ed406ec57f13639502fb8dffb457971a46443ec2ceca142a3b2c032a5faeecce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_keller, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., release=553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 14 06:05:25 localhost podman[313381]: 2025-10-14 10:05:25.708986066 +0000 UTC m=+0.138421640 container attach ed406ec57f13639502fb8dffb457971a46443ec2ceca142a3b2c032a5faeecce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_keller, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, distribution-scope=public, release=553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Oct 14 06:05:25 localhost vigilant_keller[313396]: 167 167 Oct 14 06:05:25 localhost systemd[1]: libpod-ed406ec57f13639502fb8dffb457971a46443ec2ceca142a3b2c032a5faeecce.scope: Deactivated successfully. 
Oct 14 06:05:25 localhost podman[313381]: 2025-10-14 10:05:25.714198963 +0000 UTC m=+0.143634537 container died ed406ec57f13639502fb8dffb457971a46443ec2ceca142a3b2c032a5faeecce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_keller, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, distribution-scope=public, ceph=True, architecture=x86_64, release=553, version=7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 06:05:25 localhost podman[313381]: 2025-10-14 10:05:25.621001417 +0000 UTC m=+0.050437011 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:05:25 localhost podman[313401]: 2025-10-14 10:05:25.805889651 +0000 UTC m=+0.083748279 container remove ed406ec57f13639502fb8dffb457971a46443ec2ceca142a3b2c032a5faeecce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_keller, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest 
Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, RELEASE=main, distribution-scope=public, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 14 06:05:25 localhost systemd[1]: libpod-conmon-ed406ec57f13639502fb8dffb457971a46443ec2ceca142a3b2c032a5faeecce.scope: Deactivated successfully. Oct 14 06:05:25 localhost systemd[1]: var-lib-containers-storage-overlay-f64c2494e9d5d2d59050d8e0201307cc0abdc523c53583ffd1f190e1cff68d3a-merged.mount: Deactivated successfully. Oct 14 06:05:26 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486731.swasqz (monmap changed)... 
Oct 14 06:05:26 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:05:26 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:26 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:26 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:26 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:27 localhost ceph-mon[307093]: Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:05:27 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:05:27 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:27 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:27 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:27 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 14 06:05:28 localhost ceph-mon[307093]: Saving service mon spec with placement label:mon Oct 14 06:05:28 localhost ceph-mon[307093]: Reconfiguring osd.1 (monmap changed)... 
Oct 14 06:05:28 localhost ceph-mon[307093]: Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:05:28 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:28 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:28 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:28 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:28 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 14 06:05:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:05:28 localhost podman[313418]: 2025-10-14 10:05:28.539529008 +0000 UTC m=+0.078861430 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm) Oct 14 06:05:28 localhost podman[313418]: 2025-10-14 10:05:28.575093266 +0000 UTC m=+0.114425728 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 06:05:28 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:05:29 localhost ceph-mon[307093]: Reconfiguring osd.5 (monmap changed)... Oct 14 06:05:29 localhost ceph-mon[307093]: Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:05:29 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:29 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:29 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:29 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:29 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:05:29 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:05:29 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:05:30 localhost 
ceph-mon[307093]: Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... Oct 14 06:05:30 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:05:30 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:30 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:30 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:05:30 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:05:30 localhost podman[246584]: time="2025-10-14T10:05:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:05:30 localhost podman[246584]: @ - - [14/Oct/2025:10:05:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:05:30 localhost podman[246584]: @ - - [14/Oct/2025:10:05:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18341 "" "Go-http-client/1.1" Oct 14 06:05:31 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... 
Oct 14 06:05:31 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:05:31 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:31 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:31 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:05:32 localhost ceph-mon[307093]: Reconfiguring mon.np0005486732 (monmap changed)... Oct 14 06:05:32 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:05:32 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:32 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:32 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:32 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:33 localhost ceph-mon[307093]: Reconfiguring crash.np0005486733 (monmap changed)... 
Oct 14 06:05:33 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:05:33 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:33 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:33 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 14 06:05:33 localhost openstack_network_exporter[248748]: ERROR 10:05:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:05:33 localhost openstack_network_exporter[248748]: ERROR 10:05:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:05:33 localhost openstack_network_exporter[248748]: ERROR 10:05:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:05:33 localhost openstack_network_exporter[248748]: ERROR 10:05:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:05:33 localhost openstack_network_exporter[248748]: Oct 14 06:05:33 localhost openstack_network_exporter[248748]: ERROR 10:05:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:05:33 localhost openstack_network_exporter[248748]: Oct 14 06:05:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:05:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:05:33 localhost podman[313437]: 2025-10-14 10:05:33.554268483 +0000 UTC m=+0.089580542 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:05:33 localhost podman[313437]: 2025-10-14 10:05:33.565184921 +0000 UTC 
m=+0.100497020 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:05:33 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:05:33 localhost systemd[1]: tmp-crun.RVJvtE.mount: Deactivated successfully. Oct 14 06:05:33 localhost podman[313438]: 2025-10-14 10:05:33.663149664 +0000 UTC m=+0.194789296 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:05:33 localhost podman[313438]: 2025-10-14 10:05:33.674241536 +0000 UTC m=+0.205881218 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:05:33 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:05:34 localhost ceph-mon[307093]: mon.np0005486731@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:05:34 localhost ceph-mon[307093]: Reconfiguring osd.0 (monmap changed)... Oct 14 06:05:34 localhost ceph-mon[307093]: Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:05:34 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:34 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:34 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:34 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:34 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:05:35 localhost ceph-mon[307093]: Reconfiguring osd.3 (monmap changed)... Oct 14 06:05:35 localhost ceph-mon[307093]: Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:05:35 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:35 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:35 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:35 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:35 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... 
Oct 14 06:05:35 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:05:35 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:05:35 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.668935) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436335669009, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1522, "num_deletes": 260, "total_data_size": 6663931, "memory_usage": 6908360, "flush_reason": "Manual Compaction"} Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436335702213, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 4066609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15514, 
"largest_seqno": 17031, "table_properties": {"data_size": 4059819, "index_size": 3679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17022, "raw_average_key_size": 21, "raw_value_size": 4045169, "raw_average_value_size": 5088, "num_data_blocks": 154, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436312, "oldest_key_time": 1760436312, "file_creation_time": 1760436335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 33334 microseconds, and 8817 cpu microseconds. Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.702275) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 4066609 bytes OK Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.702305) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.703901) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.703922) EVENT_LOG_v1 {"time_micros": 1760436335703916, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.703949) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 6656017, prev total WAL file size 6656017, number of live WAL files 2. Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.705348) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323632' seq:72057594037927935, type:22 .. 
'6B760031353230' seq:0, type:0; will stop at (end) Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3971KB)], [24(13MB)] Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436335705394, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 18061010, "oldest_snapshot_seqno": -1} Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 11166 keys, 17010757 bytes, temperature: kUnknown Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436335808354, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 17010757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16947943, "index_size": 33785, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 300103, "raw_average_key_size": 26, "raw_value_size": 16758190, "raw_average_value_size": 1500, "num_data_blocks": 1270, "num_entries": 11166, "num_filter_entries": 11166, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.808849) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 17010757 bytes Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.810863) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.2 rd, 165.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 13.3 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(8.6) write-amplify(4.2) OK, records in: 11692, records dropped: 526 output_compression: NoCompression Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.810895) EVENT_LOG_v1 {"time_micros": 1760436335810879, "job": 12, "event": "compaction_finished", "compaction_time_micros": 103112, "compaction_time_cpu_micros": 45154, "output_level": 6, "num_output_files": 1, "total_output_size": 17010757, "num_input_records": 11692, "num_output_records": 11166, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005486731/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436335811836, "job": 12, "event": "table_file_deletion", "file_number": 26} Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436335814245, "job": 12, "event": "table_file_deletion", "file_number": 24} Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.705274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.814400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.814409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.814413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.814416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:35 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:05:35.814419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:05:36 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:36 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:36 
localhost ceph-mon[307093]: Reconfiguring mgr.np0005486733.primvu (monmap changed)... Oct 14 06:05:36 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:05:36 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:05:36 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:05:36 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:36 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:36 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:05:36 localhost sshd[313478]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:05:37 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x55aa8513f1e0 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0 Oct 14 06:05:37 localhost ceph-mon[307093]: mon.np0005486731@2(peon) e13 my rank is now 1 (was 2) Oct 14 06:05:37 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 14 06:05:37 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 14 06:05:37 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x55aa8513f600 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Oct 14 06:05:37 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:05:37 localhost ceph-mon[307093]: log_channel(audit) log [INF] : 
from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:05:37 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486731"} v 0) Oct 14 06:05:37 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486731"} : dispatch Oct 14 06:05:37 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486732"} v 0) Oct 14 06:05:37 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486732"} : dispatch Oct 14 06:05:37 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0) Oct 14 06:05:37 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch Oct 14 06:05:37 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election Oct 14 06:05:37 localhost ceph-mon[307093]: paxos.1).electionLogic(52) init, last seen epoch 52 Oct 14 06:05:37 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:37 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:37 localhost ceph-mds[299096]: --2- [v2:172.18.0.106:6808/799411272,v1:172.18.0.106:6809/799411272] >> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] conn(0x55d556bd0400 
0x55d556bdeb00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2 Oct 14 06:05:37 localhost ceph-osd[32282]: --2- [v2:172.18.0.106:6804/3908858921,v1:172.18.0.106:6805/3908858921] >> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] conn(0x557c222ff000 0x557c1faeb700 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2 Oct 14 06:05:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:05:41 localhost podman[313480]: 2025-10-14 10:05:41.541519224 +0000 UTC m=+0.080059022 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd) Oct 14 06:05:41 localhost podman[313480]: 2025-10-14 10:05:41.582206266 +0000 UTC m=+0.120746064 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:05:41 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:05:42 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:42 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:05:42 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Oct 14 06:05:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:05:42 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Oct 14 06:05:42 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Oct 14 06:05:42 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:05:42 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:05:42 localhost ceph-mon[307093]: 
Reconfiguring daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:05:42 localhost ceph-mon[307093]: Remove daemons mon.np0005486730 Oct 14 06:05:42 localhost ceph-mon[307093]: Safe to remove mon.np0005486730: new quorum should be ['np0005486733', 'np0005486731', 'np0005486732'] (from ['np0005486733', 'np0005486731', 'np0005486732']) Oct 14 06:05:42 localhost ceph-mon[307093]: Removing monitor np0005486730 from monmap... Oct 14 06:05:42 localhost ceph-mon[307093]: Removing daemon mon.np0005486730 from np0005486730.localdomain -- ports [] Oct 14 06:05:42 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:05:42 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election Oct 14 06:05:42 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election Oct 14 06:05:42 localhost ceph-mon[307093]: mon.np0005486733 is new leader, mons np0005486733,np0005486731 in quorum (ranks 0,1) Oct 14 06:05:42 localhost ceph-mon[307093]: Health check failed: 1/3 mons down, quorum np0005486733,np0005486731 (MON_DOWN) Oct 14 06:05:42 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005486733,np0005486731 Oct 14 06:05:42 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm Oct 14 06:05:42 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm Oct 14 06:05:42 localhost ceph-mon[307093]: stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm Oct 14 06:05:42 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:05:42 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: 
['mgr.np0005486728.giajub'] Oct 14 06:05:42 localhost ceph-mon[307093]: stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho'] Oct 14 06:05:42 localhost ceph-mon[307093]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005486733,np0005486731 Oct 14 06:05:42 localhost ceph-mon[307093]: mon.np0005486732 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Oct 14 06:05:42 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:42 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:05:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:05:42 localhost podman[313499]: 2025-10-14 10:05:42.543404956 +0000 UTC m=+0.084602781 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:05:42 localhost podman[313499]: 2025-10-14 10:05:42.581264895 +0000 UTC m=+0.122462740 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 06:05:42 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:05:43 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 14 06:05:43 localhost ceph-mon[307093]: Deploying daemon mon.np0005486730 on np0005486730.localdomain Oct 14 06:05:43 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:43 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election Oct 14 06:05:43 localhost ceph-mon[307093]: paxos.1).electionLogic(55) init, last seen epoch 55, mid-election, bumping Oct 14 06:05:43 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:43 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:43 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:44 localhost ceph-mon[307093]: mon.np0005486731@1(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:05:44 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon rm", "name": "np0005486730"} : dispatch 
Oct 14 06:05:44 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:05:44 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election Oct 14 06:05:44 localhost ceph-mon[307093]: Removed label mon from host np0005486730.localdomain Oct 14 06:05:44 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election Oct 14 06:05:44 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election Oct 14 06:05:44 localhost ceph-mon[307093]: mon.np0005486733 is new leader, mons np0005486733,np0005486731,np0005486732 in quorum (ranks 0,1,2) Oct 14 06:05:44 localhost ceph-mon[307093]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005486733,np0005486731) Oct 14 06:05:44 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:05:44 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm Oct 14 06:05:44 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm Oct 14 06:05:44 localhost ceph-mon[307093]: stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm Oct 14 06:05:44 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:05:44 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub'] Oct 14 06:05:44 localhost ceph-mon[307093]: stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho'] Oct 14 06:05:44 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:05:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:05:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:05:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:05:45 localhost podman[313535]: 2025-10-14 10:05:45.376956967 +0000 UTC m=+0.085947197 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 06:05:45 localhost podman[313536]: 2025-10-14 10:05:45.435329726 +0000 UTC m=+0.140542907 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:05:45 localhost podman[313535]: 2025-10-14 10:05:45.466526299 +0000 UTC m=+0.175516529 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41) Oct 14 06:05:45 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:05:45 localhost ceph-mon[307093]: Removed label mgr from host np0005486730.localdomain Oct 14 06:05:45 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:45 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:45 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:45 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:45 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:45 localhost podman[313537]: 2025-10-14 10:05:45.522400582 +0000 UTC m=+0.225062965 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', 
'--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486730"} v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486730"} : dispatch Oct 14 06:05:45 localhost podman[313536]: 2025-10-14 10:05:45.551399056 +0000 UTC m=+0.256612227 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:05:45 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:05:45 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x55aa8513ef20 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486730"} v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486730"} : dispatch Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486731"} v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486731"} : dispatch Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486732"} v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": 
"np0005486732"} : dispatch Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0) Oct 14 06:05:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch Oct 14 06:05:45 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election Oct 14 06:05:45 localhost ceph-mon[307093]: paxos.1).electionLogic(58) init, last seen epoch 58 Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:45 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:45 localhost podman[313537]: 2025-10-14 10:05:45.6099519 +0000 UTC m=+0.312614383 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:05:45 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:05:46 localhost systemd[1]: tmp-crun.CjbUlV.mount: Deactivated successfully. Oct 14 06:05:46 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486730"} v 0) Oct 14 06:05:46 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486730"} : dispatch Oct 14 06:05:47 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0) Oct 14 06:05:47 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:05:47 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486730"} v 0) Oct 14 06:05:47 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486730"} : dispatch Oct 14 06:05:48 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486730"} v 0) Oct 14 06:05:48 localhost ceph-mon[307093]: 
log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486730"} : dispatch Oct 14 06:05:48 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_auth_request failed to assign global_id Oct 14 06:05:48 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_auth_request failed to assign global_id Oct 14 06:05:49 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_auth_request failed to assign global_id Oct 14 06:05:49 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486730"} v 0) Oct 14 06:05:49 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486730"} : dispatch Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_auth_request failed to assign global_id Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486730"} v 0) Oct 14 06:05:50 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486730"} : dispatch Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0) Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731 calling monitor 
election Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486730 calling monitor election Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486733 is new leader, mons np0005486733,np0005486731,np0005486732,np0005486730 in quorum (ranks 0,1,2,3) Oct 14 06:05:50 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:05:50 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm Oct 14 06:05:50 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm Oct 14 06:05:50 localhost ceph-mon[307093]: stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm Oct 14 06:05:50 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:05:50 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub'] Oct 14 06:05:50 localhost ceph-mon[307093]: stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho'] Oct 14 06:05:50 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:50 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:05:50 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command 
mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:05:50 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:05:50 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 14 06:05:51 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486730"} v 0) Oct 14 06:05:51 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486730"} : dispatch Oct 14 06:05:51 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:51 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:05:51 localhost ceph-mon[307093]: Removing daemon mgr.np0005486730.ddfidc from np0005486730.localdomain -- ports [8765] Oct 14 06:05:51 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:51 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:51 localhost ceph-mon[307093]: Removed label _admin from host np0005486730.localdomain Oct 14 06:05:51 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:05:51 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/430439356' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:05:51 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:05:51 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/430439356' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005486730.ddfidc"} v 0) Oct 14 06:05:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth rm", "entity": "mgr.np0005486730.ddfidc"} : dispatch Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command({"prefix": "mon ok-to-stop", "ids": ["np0005486730"]} v 0) Oct 14 06:05:53 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon ok-to-stop", "ids": ["np0005486730"]} : dispatch Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command mon_command({"prefix": "quorum_status"} v 0) Oct 14 06:05:53 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "quorum_status"} : dispatch Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e14 handle_command 
mon_command({"prefix": "mon rm", "name": "np0005486730"} v 0) Oct 14 06:05:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon rm", "name": "np0005486730"} : dispatch Oct 14 06:05:53 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x55aa8513f600 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486731"} v 0) Oct 14 06:05:53 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486731"} : dispatch Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486732"} v 0) Oct 14 06:05:53 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486732"} : dispatch Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0) Oct 14 06:05:53 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch Oct 14 06:05:53 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election Oct 14 06:05:53 localhost ceph-mon[307093]: paxos.1).electionLogic(62) init, last seen epoch 62 Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(electing) e15 
collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:53 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:05:54 localhost ceph-mon[307093]: mon.np0005486731@1(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:05:54 localhost ceph-mon[307093]: Safe to remove mon.np0005486730: new quorum should be ['np0005486733', 'np0005486731', 'np0005486732'] (from ['np0005486733', 'np0005486731', 'np0005486732']) Oct 14 06:05:54 localhost ceph-mon[307093]: Removing monitor np0005486730 from monmap... Oct 14 06:05:54 localhost ceph-mon[307093]: Removing daemon mon.np0005486730 from np0005486730.localdomain -- ports [] Oct 14 06:05:54 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election Oct 14 06:05:54 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election Oct 14 06:05:54 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election Oct 14 06:05:54 localhost ceph-mon[307093]: mon.np0005486733 is new leader, mons np0005486733,np0005486731,np0005486732 in quorum (ranks 0,1,2) Oct 14 06:05:54 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:05:54 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm Oct 14 06:05:54 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm Oct 14 06:05:54 localhost ceph-mon[307093]: stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm Oct 14 06:05:54 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:05:54 localhost ceph-mon[307093]: stray host 
np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub'] Oct 14 06:05:54 localhost ceph-mon[307093]: stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho'] Oct 14 06:05:54 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 14 06:05:54 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 14 06:05:54 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:05:54 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:05:55 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:05:55 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:55 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:55 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:56 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0) Oct 14 06:05:56 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0) Oct 14 06:05:56 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:05:56 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' 
entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:05:56 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:05:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:05:56 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0) Oct 14 06:05:56 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0) Oct 14 06:05:57 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:57 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:57 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:05:57 localhost ceph-mon[307093]: Removing np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:57 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:05:57 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:05:57 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:05:57 localhost ceph-mon[307093]: Removing np0005486730.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:05:57 localhost ceph-mon[307093]: Removing np0005486730.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:05:57 localhost ceph-mon[307093]: from='mgr.17415 ' 
entity='mgr.np0005486732.pasqzz' Oct 14 06:05:57 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:57 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:05:57.631 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:05:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:05:57.633 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:05:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:05:57.633 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:05:57 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:05:57 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:05:58 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:05:58 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:05:58 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:05:58 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:05:58 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:05:58 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:05:58 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:05:58 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 14 06:05:58 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:58 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:05:58 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:05:58 localhost ceph-mon[307093]: Updating 
np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:58 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:05:58 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:58 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:58 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:58 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:58 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:58 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:58 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:58 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:58 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486730.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:59 localhost ceph-mon[307093]: mon.np0005486731@1(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:05:59 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain.devices.0}] v 0) Oct 14 06:05:59 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486730.localdomain}] v 0) Oct 14 06:05:59 
localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 14 06:05:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:59 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:05:59 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:05:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:05:59 localhost podman[313975]: 2025-10-14 10:05:59.522038129 +0000 UTC m=+0.101169918 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:05:59 localhost podman[313975]: 2025-10-14 10:05:59.561451888 +0000 UTC m=+0.140583637 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm) Oct 14 06:05:59 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:05:59 localhost podman[314029]: Oct 14 06:05:59 localhost podman[314029]: 2025-10-14 10:05:59.892502466 +0000 UTC m=+0.058605627 container create 9f59e44631ced44b016eb4a67bfd143ef37dacd9d9d8aac1fd2ab98bbd82653d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_keldysh, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, name=rhceph, maintainer=Guillaume Abrioux ) Oct 14 06:05:59 localhost ceph-mon[307093]: Reconfiguring crash.np0005486730 (monmap changed)... 
Oct 14 06:05:59 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486730 on np0005486730.localdomain Oct 14 06:05:59 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:59 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:05:59 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:59 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:05:59 localhost systemd[1]: Started libpod-conmon-9f59e44631ced44b016eb4a67bfd143ef37dacd9d9d8aac1fd2ab98bbd82653d.scope. Oct 14 06:05:59 localhost systemd[1]: Started libcrun container. 
Oct 14 06:05:59 localhost podman[314029]: 2025-10-14 10:05:59.863881861 +0000 UTC m=+0.029984982 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:05:59 localhost podman[314029]: 2025-10-14 10:05:59.965278844 +0000 UTC m=+0.131381975 container init 9f59e44631ced44b016eb4a67bfd143ef37dacd9d9d8aac1fd2ab98bbd82653d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_keldysh, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.33.12, ceph=True, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:05:59 localhost podman[314029]: 2025-10-14 10:05:59.977998 +0000 UTC m=+0.144101131 container start 9f59e44631ced44b016eb4a67bfd143ef37dacd9d9d8aac1fd2ab98bbd82653d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_keldysh, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55) Oct 14 06:05:59 localhost podman[314029]: 2025-10-14 10:05:59.978307758 +0000 UTC m=+0.144410929 container attach 9f59e44631ced44b016eb4a67bfd143ef37dacd9d9d8aac1fd2ab98bbd82653d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_keldysh, com.redhat.component=rhceph-container, release=553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, 
distribution-scope=public) Oct 14 06:05:59 localhost laughing_keldysh[314045]: 167 167 Oct 14 06:05:59 localhost systemd[1]: libpod-9f59e44631ced44b016eb4a67bfd143ef37dacd9d9d8aac1fd2ab98bbd82653d.scope: Deactivated successfully. Oct 14 06:05:59 localhost podman[314029]: 2025-10-14 10:05:59.982373305 +0000 UTC m=+0.148476406 container died 9f59e44631ced44b016eb4a67bfd143ef37dacd9d9d8aac1fd2ab98bbd82653d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_keldysh, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main) Oct 14 06:06:00 localhost podman[314050]: 2025-10-14 10:06:00.08076532 +0000 UTC m=+0.085146007 container remove 9f59e44631ced44b016eb4a67bfd143ef37dacd9d9d8aac1fd2ab98bbd82653d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_keldysh, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, 
maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True) Oct 14 06:06:00 localhost systemd[1]: libpod-conmon-9f59e44631ced44b016eb4a67bfd143ef37dacd9d9d8aac1fd2ab98bbd82653d.scope: Deactivated successfully. Oct 14 06:06:00 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:00 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:00 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Oct 14 06:06:00 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:06:00 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:00 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} 
: dispatch Oct 14 06:06:00 localhost systemd[1]: var-lib-containers-storage-overlay-50bc6ebafdddc7f8e57dc8c8027c31de12016df27bf51d9f821f16e4b40d0e6d-merged.mount: Deactivated successfully. Oct 14 06:06:00 localhost podman[246584]: time="2025-10-14T10:06:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:06:00 localhost podman[246584]: @ - - [14/Oct/2025:10:06:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:06:00 localhost podman[246584]: @ - - [14/Oct/2025:10:06:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18341 "" "Go-http-client/1.1" Oct 14 06:06:00 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:06:00 localhost podman[314116]: Oct 14 06:06:00 localhost podman[314116]: 2025-10-14 10:06:00.847907203 +0000 UTC m=+0.072719358 container create 249cf92dcbc20e8fc94bc34b35e87ddc31d782955af845fc57b07d38dc142973 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_austin, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph) Oct 14 06:06:00 localhost systemd[1]: Started libpod-conmon-249cf92dcbc20e8fc94bc34b35e87ddc31d782955af845fc57b07d38dc142973.scope. Oct 14 06:06:00 localhost systemd[1]: Started libcrun container. Oct 14 06:06:00 localhost podman[314116]: 2025-10-14 10:06:00.906066367 +0000 UTC m=+0.130878512 container init 249cf92dcbc20e8fc94bc34b35e87ddc31d782955af845fc57b07d38dc142973 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_austin, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=) Oct 14 06:06:00 localhost podman[314116]: 2025-10-14 10:06:00.915803024 +0000 UTC m=+0.140615169 container start 249cf92dcbc20e8fc94bc34b35e87ddc31d782955af845fc57b07d38dc142973 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_austin, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, version=7) Oct 14 06:06:00 localhost podman[314116]: 2025-10-14 10:06:00.916079111 +0000 UTC m=+0.140891266 container attach 249cf92dcbc20e8fc94bc34b35e87ddc31d782955af845fc57b07d38dc142973 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_austin, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in 
a fully featured and supported base image., maintainer=Guillaume Abrioux , distribution-scope=public, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, CEPH_POINT_RELEASE=) Oct 14 06:06:00 localhost crazy_austin[314131]: 167 167 Oct 14 06:06:00 localhost systemd[1]: libpod-249cf92dcbc20e8fc94bc34b35e87ddc31d782955af845fc57b07d38dc142973.scope: Deactivated successfully. Oct 14 06:06:00 localhost podman[314116]: 2025-10-14 10:06:00.819497994 +0000 UTC m=+0.044310149 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:00 localhost podman[314116]: 2025-10-14 10:06:00.91907953 +0000 UTC m=+0.143891685 container died 249cf92dcbc20e8fc94bc34b35e87ddc31d782955af845fc57b07d38dc142973 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_austin, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, release=553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:06:01 localhost podman[314136]: 2025-10-14 10:06:01.01620367 +0000 UTC 
m=+0.084465788 container remove 249cf92dcbc20e8fc94bc34b35e87ddc31d782955af845fc57b07d38dc142973 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_austin, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git) Oct 14 06:06:01 localhost systemd[1]: libpod-conmon-249cf92dcbc20e8fc94bc34b35e87ddc31d782955af845fc57b07d38dc142973.scope: Deactivated successfully. Oct 14 06:06:01 localhost ceph-mon[307093]: Reconfiguring crash.np0005486731 (monmap changed)... Oct 14 06:06:01 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain Oct 14 06:06:01 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:01 localhost ceph-mon[307093]: Reconfiguring osd.2 (monmap changed)... 
Oct 14 06:06:01 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:01 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:06:01 localhost ceph-mon[307093]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:06:01 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:01 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:01 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:01 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Oct 14 06:06:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:06:01 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:01 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:01 localhost systemd[1]: var-lib-containers-storage-overlay-861a4ab64144d5f2392a8887c365eb83412b6a181aa1f8a0313db345a21e074a-merged.mount: Deactivated successfully. 
Oct 14 06:06:01 localhost podman[314212]: Oct 14 06:06:01 localhost podman[314212]: 2025-10-14 10:06:01.892171794 +0000 UTC m=+0.073156280 container create baa7a01ea3bcbaf82ab1c56289b1d568e6f4d89efdfca2d86560d8c27dba20fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_nash, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=) Oct 14 06:06:01 localhost systemd[1]: Started libpod-conmon-baa7a01ea3bcbaf82ab1c56289b1d568e6f4d89efdfca2d86560d8c27dba20fe.scope. Oct 14 06:06:01 localhost systemd[1]: Started libcrun container. 
Oct 14 06:06:01 localhost podman[314212]: 2025-10-14 10:06:01.950340877 +0000 UTC m=+0.131325373 container init baa7a01ea3bcbaf82ab1c56289b1d568e6f4d89efdfca2d86560d8c27dba20fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_nash, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.33.12, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:06:01 localhost podman[314212]: 2025-10-14 10:06:01.862337917 +0000 UTC m=+0.043322423 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:01 localhost podman[314212]: 2025-10-14 10:06:01.964706886 +0000 UTC m=+0.145691372 container start baa7a01ea3bcbaf82ab1c56289b1d568e6f4d89efdfca2d86560d8c27dba20fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_nash, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, ceph=True, version=7, name=rhceph, com.redhat.component=rhceph-container, release=553, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64) Oct 14 06:06:01 localhost podman[314212]: 2025-10-14 10:06:01.965084716 +0000 UTC m=+0.146069242 container attach baa7a01ea3bcbaf82ab1c56289b1d568e6f4d89efdfca2d86560d8c27dba20fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_nash, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=553, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 06:06:01 localhost 
nifty_nash[314227]: 167 167 Oct 14 06:06:01 localhost systemd[1]: libpod-baa7a01ea3bcbaf82ab1c56289b1d568e6f4d89efdfca2d86560d8c27dba20fe.scope: Deactivated successfully. Oct 14 06:06:01 localhost podman[314212]: 2025-10-14 10:06:01.96788517 +0000 UTC m=+0.148869706 container died baa7a01ea3bcbaf82ab1c56289b1d568e6f4d89efdfca2d86560d8c27dba20fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_nash, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12) Oct 14 06:06:02 localhost podman[314232]: 2025-10-14 10:06:02.096282925 +0000 UTC m=+0.116175214 container remove baa7a01ea3bcbaf82ab1c56289b1d568e6f4d89efdfca2d86560d8c27dba20fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_nash, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vcs-type=git, RELEASE=main, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux , release=553, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph) Oct 14 06:06:02 localhost systemd[1]: libpod-conmon-baa7a01ea3bcbaf82ab1c56289b1d568e6f4d89efdfca2d86560d8c27dba20fe.scope: Deactivated successfully. Oct 14 06:06:02 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:02 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:02 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:06:02 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:02 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:02 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 14 06:06:02 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' 
entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:02 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:02 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:02 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 14 06:06:02 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 14 06:06:02 localhost systemd[1]: tmp-crun.QRS9zw.mount: Deactivated successfully. Oct 14 06:06:02 localhost systemd[1]: var-lib-containers-storage-overlay-bde482b5258ae7b978ef9e731961e58a0f9f5246af8a6f84dd8004ee59a8c0b8-merged.mount: Deactivated successfully. 
Oct 14 06:06:02 localhost podman[314309]: Oct 14 06:06:02 localhost podman[314309]: 2025-10-14 10:06:02.882762269 +0000 UTC m=+0.067507220 container create ead2362079cfa25ed4088db3fc18f6f7bfff54106585be7f7a0c9accc8c47793 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_spence, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhceph, ceph=True, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git) Oct 14 06:06:02 localhost systemd[1]: Started libpod-conmon-ead2362079cfa25ed4088db3fc18f6f7bfff54106585be7f7a0c9accc8c47793.scope. Oct 14 06:06:02 localhost systemd[1]: Started libcrun container. 
Oct 14 06:06:02 localhost podman[314309]: 2025-10-14 10:06:02.946290213 +0000 UTC m=+0.131035164 container init ead2362079cfa25ed4088db3fc18f6f7bfff54106585be7f7a0c9accc8c47793 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_spence, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux , ceph=True, version=7) Oct 14 06:06:02 localhost podman[314309]: 2025-10-14 10:06:02.95523116 +0000 UTC m=+0.139976131 container start ead2362079cfa25ed4088db3fc18f6f7bfff54106585be7f7a0c9accc8c47793 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_spence, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, RELEASE=main, release=553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, 
version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public) Oct 14 06:06:02 localhost podman[314309]: 2025-10-14 10:06:02.955891287 +0000 UTC m=+0.140636298 container attach ead2362079cfa25ed4088db3fc18f6f7bfff54106585be7f7a0c9accc8c47793 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_spence, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_CLEAN=True) Oct 14 06:06:02 localhost relaxed_spence[314324]: 167 167 Oct 14 06:06:02 localhost systemd[1]: libpod-ead2362079cfa25ed4088db3fc18f6f7bfff54106585be7f7a0c9accc8c47793.scope: 
Deactivated successfully. Oct 14 06:06:02 localhost podman[314309]: 2025-10-14 10:06:02.85889605 +0000 UTC m=+0.043641021 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:02 localhost podman[314309]: 2025-10-14 10:06:02.958500366 +0000 UTC m=+0.143245337 container died ead2362079cfa25ed4088db3fc18f6f7bfff54106585be7f7a0c9accc8c47793 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_spence, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:06:03 localhost podman[314329]: 2025-10-14 10:06:03.051393135 +0000 UTC m=+0.080130014 container remove ead2362079cfa25ed4088db3fc18f6f7bfff54106585be7f7a0c9accc8c47793 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_spence, description=Red Hat Ceph Storage 7, release=553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-09-24T08:57:55, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12) Oct 14 06:06:03 localhost systemd[1]: libpod-conmon-ead2362079cfa25ed4088db3fc18f6f7bfff54106585be7f7a0c9accc8c47793.scope: Deactivated successfully. Oct 14 06:06:03 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:03 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:03 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:06:03 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:03 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 14 06:06:03 localhost 
ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mgr services"} : dispatch Oct 14 06:06:03 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:03 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:03 localhost ceph-mon[307093]: Reconfiguring osd.4 (monmap changed)... Oct 14 06:06:03 localhost ceph-mon[307093]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : 
dispatch Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:03 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:03 localhost openstack_network_exporter[248748]: ERROR 10:06:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:06:03 localhost openstack_network_exporter[248748]: ERROR 10:06:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:06:03 localhost openstack_network_exporter[248748]: ERROR 10:06:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:06:03 localhost openstack_network_exporter[248748]: ERROR 10:06:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:06:03 localhost openstack_network_exporter[248748]: Oct 14 06:06:03 localhost openstack_network_exporter[248748]: ERROR 10:06:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:06:03 localhost openstack_network_exporter[248748]: Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.436564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436363436633, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1186, "num_deletes": 257, "total_data_size": 1588574, "memory_usage": 1622384, "flush_reason": "Manual Compaction"} Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436363448848, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 930590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17036, "largest_seqno": 18217, "table_properties": {"data_size": 925418, "index_size": 2323, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15395, "raw_average_key_size": 22, "raw_value_size": 913365, "raw_average_value_size": 1317, "num_data_blocks": 98, "num_entries": 693, "num_filter_entries": 693, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436336, "oldest_key_time": 1760436336, "file_creation_time": 1760436363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12326 microseconds, and 3733 cpu microseconds. Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.448898) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 930590 bytes OK Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.448920) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.450927) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.450946) EVENT_LOG_v1 {"time_micros": 1760436363450940, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.450966) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1582199, prev total WAL file 
size 1582523, number of live WAL files 2. Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.451684) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373632' seq:72057594037927935, type:22 .. '6C6F676D0034303135' seq:0, type:0; will stop at (end) Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(908KB)], [27(16MB)] Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436363451753, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 17941347, "oldest_snapshot_seqno": -1} Oct 14 06:06:03 localhost systemd[1]: var-lib-containers-storage-overlay-490d14aa8e8d78da6de7277b0bec5e8a9564087a4a41098fbaf40cfa7343d944-merged.mount: Deactivated successfully. 
Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11310 keys, 17794452 bytes, temperature: kUnknown Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436363539709, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 17794452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17729637, "index_size": 35466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28293, "raw_key_size": 305365, "raw_average_key_size": 26, "raw_value_size": 17536244, "raw_average_value_size": 1550, "num_data_blocks": 1339, "num_entries": 11310, "num_filter_entries": 11310, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.540052) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 17794452 bytes Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.547291) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.6 rd, 201.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.2 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(38.4) write-amplify(19.1) OK, records in: 11859, records dropped: 549 output_compression: NoCompression Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.547325) EVENT_LOG_v1 {"time_micros": 1760436363547310, "job": 14, "event": "compaction_finished", "compaction_time_micros": 88114, "compaction_time_cpu_micros": 30613, "output_level": 6, "num_output_files": 1, "total_output_size": 17794452, "num_input_records": 11859, "num_output_records": 11310, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436363547567, "job": 14, "event": "table_file_deletion", "file_number": 29} Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436363549892, 
"job": 14, "event": "table_file_deletion", "file_number": 27} Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.451597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.549933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.549939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.549941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.549944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:03 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:03.549945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:06:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:06:03 localhost podman[314398]: 2025-10-14 10:06:03.792600006 +0000 UTC m=+0.079434536 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009) Oct 14 06:06:03 localhost podman[314412]: Oct 14 06:06:03 localhost podman[314412]: 
2025-10-14 10:06:03.817217954 +0000 UTC m=+0.079781254 container create 6736616dbf8fab5d164ff15092f3b50afd9b94a51e7c792f533c01cb6a0f0c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_wu, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, release=553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Oct 14 06:06:03 localhost podman[314398]: 2025-10-14 10:06:03.829127539 +0000 UTC m=+0.115962109 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:06:03 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:06:03 localhost systemd[1]: Started libpod-conmon-6736616dbf8fab5d164ff15092f3b50afd9b94a51e7c792f533c01cb6a0f0c9e.scope. Oct 14 06:06:03 localhost systemd[1]: Started libcrun container. 
Oct 14 06:06:03 localhost podman[314412]: 2025-10-14 10:06:03.885932816 +0000 UTC m=+0.148496156 container init 6736616dbf8fab5d164ff15092f3b50afd9b94a51e7c792f533c01cb6a0f0c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_wu, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, version=7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, RELEASE=main, build-date=2025-09-24T08:57:55) Oct 14 06:06:03 localhost podman[314412]: 2025-10-14 10:06:03.792560585 +0000 UTC m=+0.055123925 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:03 localhost podman[314412]: 2025-10-14 10:06:03.894547013 +0000 UTC m=+0.157110333 container start 6736616dbf8fab5d164ff15092f3b50afd9b94a51e7c792f533c01cb6a0f0c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_wu, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, 
com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , ceph=True, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55) Oct 14 06:06:03 localhost festive_wu[314444]: 167 167 Oct 14 06:06:03 localhost podman[314412]: 2025-10-14 10:06:03.896289459 +0000 UTC m=+0.158852819 container attach 6736616dbf8fab5d164ff15092f3b50afd9b94a51e7c792f533c01cb6a0f0c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_wu, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
architecture=x86_64) Oct 14 06:06:03 localhost systemd[1]: libpod-6736616dbf8fab5d164ff15092f3b50afd9b94a51e7c792f533c01cb6a0f0c9e.scope: Deactivated successfully. Oct 14 06:06:03 localhost podman[314412]: 2025-10-14 10:06:03.899276078 +0000 UTC m=+0.161839418 container died 6736616dbf8fab5d164ff15092f3b50afd9b94a51e7c792f533c01cb6a0f0c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_wu, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main) Oct 14 06:06:03 localhost podman[314454]: 2025-10-14 10:06:03.990464621 +0000 UTC m=+0.079914127 container remove 6736616dbf8fab5d164ff15092f3b50afd9b94a51e7c792f533c01cb6a0f0c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_wu, GIT_CLEAN=True, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, RELEASE=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:06:03 localhost systemd[1]: libpod-conmon-6736616dbf8fab5d164ff15092f3b50afd9b94a51e7c792f533c01cb6a0f0c9e.scope: Deactivated successfully. Oct 14 06:06:04 localhost podman[314399]: 2025-10-14 10:06:03.904515565 +0000 UTC m=+0.187921885 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:06:04 localhost podman[314399]: 2025-10-14 10:06:04.040147832 +0000 
UTC m=+0.323554222 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:06:04 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) 
Oct 14 06:06:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) 
Oct 14 06:06:04 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) 
Oct 14 06:06:04 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 
Oct 14 06:06:04 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... 
Oct 14 06:06:04 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain 
Oct 14 06:06:04 localhost ceph-mon[307093]: Added label _no_schedule to host np0005486730.localdomain 
Oct 14 06:06:04 localhost ceph-mon[307093]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005486730.localdomain 
Oct 14 06:06:04 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486731.swasqz (monmap changed)... 
Oct 14 06:06:04 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain 
Oct 14 06:06:04 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' 
Oct 14 06:06:04 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch 
Oct 14 06:06:04 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' 
Oct 14 06:06:04 localhost systemd[1]: var-lib-containers-storage-overlay-1c02174f2dcd1091ee8223e65ead9d1717a1da684dba5b19c5972660f97a6ad6-merged.mount: Deactivated successfully. 
Oct 14 06:06:04 localhost podman[314529]: Oct 14 06:06:04 localhost podman[314529]: 2025-10-14 10:06:04.689629744 +0000 UTC m=+0.077480504 container create 49e099392f7c5b13f5f15db5aaec7b307b7d3239284a3797b35de0d7f11202a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_shamir, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12) Oct 14 06:06:04 localhost systemd[1]: Started libpod-conmon-49e099392f7c5b13f5f15db5aaec7b307b7d3239284a3797b35de0d7f11202a8.scope. Oct 14 06:06:04 localhost systemd[1]: Started libcrun container. 
Oct 14 06:06:04 localhost podman[314529]: 2025-10-14 10:06:04.743832613 +0000 UTC m=+0.131683373 container init 49e099392f7c5b13f5f15db5aaec7b307b7d3239284a3797b35de0d7f11202a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_shamir, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, ceph=True, release=553) Oct 14 06:06:04 localhost podman[314529]: 2025-10-14 10:06:04.75282168 +0000 UTC m=+0.140672430 container start 49e099392f7c5b13f5f15db5aaec7b307b7d3239284a3797b35de0d7f11202a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_shamir, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, version=7, maintainer=Guillaume Abrioux , vcs-type=git, name=rhceph, release=553, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main) Oct 14 06:06:04 localhost podman[314529]: 2025-10-14 10:06:04.753087167 +0000 UTC m=+0.140937917 container attach 49e099392f7c5b13f5f15db5aaec7b307b7d3239284a3797b35de0d7f11202a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_shamir, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, ceph=True, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 14 06:06:04 localhost hopeful_shamir[314544]: 167 167 Oct 14 06:06:04 localhost systemd[1]: libpod-49e099392f7c5b13f5f15db5aaec7b307b7d3239284a3797b35de0d7f11202a8.scope: Deactivated successfully. Oct 14 06:06:04 localhost podman[314529]: 2025-10-14 10:06:04.755827309 +0000 UTC m=+0.143678109 container died 49e099392f7c5b13f5f15db5aaec7b307b7d3239284a3797b35de0d7f11202a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_shamir, com.redhat.component=rhceph-container, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, version=7, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 06:06:04 localhost podman[314529]: 2025-10-14 10:06:04.657549468 +0000 UTC m=+0.045400228 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:04 localhost podman[314549]: 2025-10-14 10:06:04.847687031 +0000 UTC m=+0.080243066 container remove 49e099392f7c5b13f5f15db5aaec7b307b7d3239284a3797b35de0d7f11202a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_shamir, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux , version=7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7) Oct 14 06:06:04 localhost systemd[1]: libpod-conmon-49e099392f7c5b13f5f15db5aaec7b307b7d3239284a3797b35de0d7f11202a8.scope: Deactivated successfully. 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) 
Oct 14 06:06:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch 
Oct 14 06:06:04 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) 
Oct 14 06:06:04 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch 
Oct 14 06:06:05 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) 
Oct 14 06:06:05 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain"} v 0) 
Oct 14 06:06:05 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain"} : dispatch 
Oct 14 06:06:05 localhost ceph-mon[307093]: Reconfiguring mon.np0005486731 (monmap changed)... 
Oct 14 06:06:05 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486731 on np0005486731.localdomain 
Oct 14 06:06:05 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' 
Oct 14 06:06:05 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch 
Oct 14 06:06:05 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' 
Oct 14 06:06:05 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch 
Oct 14 06:06:05 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain"} : dispatch 
Oct 14 06:06:05 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' 
Oct 14 06:06:05 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain"} : dispatch 
Oct 14 06:06:05 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain"}]': finished 
Oct 14 06:06:05 localhost systemd[1]: var-lib-containers-storage-overlay-924a9125808a5334f472efb614512bc6f7b1715989acfca8d6adcf6e71bcbdf0-merged.mount: Deactivated successfully. 
Oct 14 06:06:05 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) 
Oct 14 06:06:05 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) 
Oct 14 06:06:05 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) 
Oct 14 06:06:05 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch 
Oct 14 06:06:05 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) 
Oct 14 06:06:05 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch 
Oct 14 06:06:06 localhost ceph-mon[307093]: Reconfiguring crash.np0005486732 (monmap changed)... 
Oct 14 06:06:06 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain 
Oct 14 06:06:06 localhost ceph-mon[307093]: Removed host np0005486730.localdomain 
Oct 14 06:06:06 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' 
Oct 14 06:06:06 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' 
Oct 14 06:06:06 localhost ceph-mon[307093]: Reconfiguring osd.1 (monmap changed)... 
Oct 14 06:06:06 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch 
Oct 14 06:06:06 localhost ceph-mon[307093]: Reconfiguring daemon osd.1 on np0005486732.localdomain 
Oct 14 06:06:06 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) 
Oct 14 06:06:06 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) 
Oct 14 06:06:06 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) 
Oct 14 06:06:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch 
Oct 14 06:06:06 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) 
Oct 14 06:06:06 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch 
Oct 14 06:06:07 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' 
Oct 14 06:06:07 localhost ceph-mon[307093]: Reconfiguring osd.5 (monmap changed)... 
Oct 14 06:06:07 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch 
Oct 14 06:06:07 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' 
Oct 14 06:06:07 localhost ceph-mon[307093]: Reconfiguring daemon osd.5 on np0005486732.localdomain 
Oct 14 06:06:07 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) 
Oct 14 06:06:08 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) 
Oct 14 06:06:08 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) 
Oct 14 06:06:08 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch 
Oct 14 06:06:08 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) 
Oct 14 06:06:08 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch 
Oct 14 06:06:08 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) 
Oct 14 06:06:08 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:08 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:06:08 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:08 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 14 06:06:08 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mgr services"} : dispatch Oct 14 06:06:08 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:08 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:09 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:09 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... 
Oct 14 06:06:09 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:09 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:09 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:09 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:06:09 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:09 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:09 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:09 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:09 localhost ceph-mon[307093]: mon.np0005486731@1(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:06:09 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:09 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:09 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Oct 14 06:06:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:06:09 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Oct 14 06:06:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Oct 14 06:06:09 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:10 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... 
Oct 14 06:06:10 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:06:10 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:10 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:10 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:06:10 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:10 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:10 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 14 06:06:10 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:10 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:10 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:11 localhost ceph-mon[307093]: Reconfiguring mon.np0005486732 (monmap changed)... 
Oct 14 06:06:11 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:06:11 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:11 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:11 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:11 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:11 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:11 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:11 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Oct 14 06:06:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 14 06:06:11 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:11 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:12 localhost ceph-mon[307093]: Reconfiguring crash.np0005486733 (monmap 
changed)... Oct 14 06:06:12 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:06:12 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:12 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:12 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 14 06:06:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:06:12 localhost podman[314565]: 2025-10-14 10:06:12.542423409 +0000 UTC m=+0.082972798 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, org.label-schema.license=GPLv2) Oct 14 06:06:12 localhost podman[314565]: 2025-10-14 10:06:12.584021586 +0000 UTC m=+0.124570935 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:06:12 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:06:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:06:12 localhost podman[314584]: 2025-10-14 10:06:12.70369197 +0000 UTC m=+0.083511712 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid) Oct 14 06:06:12 localhost podman[314584]: 2025-10-14 10:06:12.719327043 +0000 UTC m=+0.099146825 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:06:12 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:06:12 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:12 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:12 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Oct 14 06:06:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:06:12 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:12 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:12 localhost nova_compute[295778]: 2025-10-14 10:06:12.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:12 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 14 06:06:13 localhost ceph-mon[307093]: Reconfiguring osd.0 (monmap changed)... 
Oct 14 06:06:13 localhost ceph-mon[307093]: Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:06:13 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:13 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:06:13 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:13 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:13 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:13 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:13 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 14 06:06:13 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:13 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:13 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:13 localhost nova_compute[295778]: 2025-10-14 10:06:13.926 2 DEBUG oslo_service.periodic_task [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:13 localhost nova_compute[295778]: 2025-10-14 10:06:13.927 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 14 06:06:13 localhost nova_compute[295778]: 2025-10-14 10:06:13.946 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 14 06:06:14 localhost ceph-mon[307093]: Reconfiguring osd.3 (monmap changed)... Oct 14 06:06:14 localhost ceph-mon[307093]: Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:06:14 localhost ceph-mon[307093]: Saving service mon spec with placement label:mon Oct 14 06:06:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:14 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:14 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:14 localhost ceph-mon[307093]: mon.np0005486731@1(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:06:14 
localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:14 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:14 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:06:14 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:14 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 14 06:06:14 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mgr services"} : dispatch Oct 14 06:06:14 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:14 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:15 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... 
Oct 14 06:06:15 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:06:15 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:15 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:15 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:15 localhost ceph-mon[307093]: from='mgr.17415 ' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.096224) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436375096264, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 733, "num_deletes": 252, "total_data_size": 1029458, "memory_usage": 1043000, "flush_reason": "Manual Compaction"} Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436375102694, "cf_name": "default", "job": 15, 
"event": "table_file_creation", "file_number": 32, "file_size": 596345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18222, "largest_seqno": 18950, "table_properties": {"data_size": 592548, "index_size": 1524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10166, "raw_average_key_size": 21, "raw_value_size": 584529, "raw_average_value_size": 1246, "num_data_blocks": 63, "num_entries": 469, "num_filter_entries": 469, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436363, "oldest_key_time": 1760436363, "file_creation_time": 1760436375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 6538 microseconds, and 2644 cpu microseconds. Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.102763) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 596345 bytes OK Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.102784) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.104423) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.104445) EVENT_LOG_v1 {"time_micros": 1760436375104438, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.104465) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1025251, prev total WAL file size 1025251, number of live WAL files 2. Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.105125) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. 
'7061786F73003131353437' seq:0, type:0; will stop at (end) Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(582KB)], [30(16MB)] Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436375105190, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 18390797, "oldest_snapshot_seqno": -1} Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11253 keys, 15185947 bytes, temperature: kUnknown Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436375194387, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 15185947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15123429, "index_size": 33297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28165, "raw_key_size": 305009, "raw_average_key_size": 27, "raw_value_size": 14932865, "raw_average_value_size": 1327, "num_data_blocks": 1246, "num_entries": 11253, "num_filter_entries": 11253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.194716) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 15185947 bytes Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.196700) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.9 rd, 170.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 17.0 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(56.3) write-amplify(25.5) OK, records in: 11779, records dropped: 526 output_compression: NoCompression Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.196752) EVENT_LOG_v1 {"time_micros": 1760436375196717, "job": 16, "event": "compaction_finished", "compaction_time_micros": 89299, "compaction_time_cpu_micros": 41873, "output_level": 6, "num_output_files": 1, "total_output_size": 15185947, "num_input_records": 11779, "num_output_records": 11253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005486731/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436375196972, "job": 16, "event": "table_file_deletion", "file_number": 32} Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436375199311, "job": 16, "event": "table_file_deletion", "file_number": 30} Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.104957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.199362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.199368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.199371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.199374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:06:15.199377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:15 
localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "quorum_status"} v 0) Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "quorum_status"} : dispatch Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e15 handle_command mon_command({"prefix": "mon rm", "name": "np0005486733"} v 0) Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon rm", "name": "np0005486733"} : dispatch Oct 14 06:06:15 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x55aa8513f080 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731@1(peon) e16 my rank is now 0 (was 1) Oct 14 06:06:15 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 14 06:06:15 localhost ceph-mgr[300442]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 14 06:06:15 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x55aa8513f1e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election Oct 14 06:06:15 localhost ceph-mon[307093]: paxos.0).electionLogic(64) init, last seen epoch 64 Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e16 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486731"} v 0) Oct 14 06:06:15 
localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486731"} : dispatch Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 is new leader, mons np0005486731,np0005486732 in quorum (ranks 0,1) Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486732"} v 0) Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486732"} : dispatch Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : monmap epoch 16 Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : fsid fcadf6e2-9176-5818-a8d0-37b19acf8eaf Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : last_changed 2025-10-14T10:06:15.486082+0000 Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : created 2025-10-14T07:49:51.150761+0000 Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : election_strategy: 1 Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005486731 Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005486732 Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005486732.xkownj=up:active} 2 up:standby Oct 14 06:06:15 
localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e82: 6 total, 6 up, 6 in Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e30: np0005486732.pasqzz(active, since 63s), standbys: np0005486733.primvu, np0005486728.giajub, np0005486729.xpybho, np0005486730.ddfidc, np0005486731.swasqz Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub'] Oct 14 06:06:15 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho'] Oct 14 06:06:15 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486733.primvu (monmap changed)... 
Oct 14 06:06:15 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:06:15 localhost ceph-mon[307093]: Remove daemons mon.np0005486733 Oct 14 06:06:15 localhost ceph-mon[307093]: Safe to remove mon.np0005486733: new quorum should be ['np0005486731', 'np0005486732'] (from ['np0005486731', 'np0005486732']) Oct 14 06:06:15 localhost ceph-mon[307093]: Removing monitor np0005486733 from monmap... Oct 14 06:06:15 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon rm", "name": "np0005486733"} : dispatch Oct 14 06:06:15 localhost ceph-mon[307093]: Removing daemon mon.np0005486733 from np0005486733.localdomain -- ports [] Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election Oct 14 06:06:15 localhost ceph-mon[307093]: mon.np0005486731 is new leader, mons np0005486731,np0005486732 in quorum (ranks 0,1) Oct 14 06:06:15 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm Oct 14 06:06:15 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub'] Oct 14 06:06:15 localhost ceph-mon[307093]: stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho'] Oct 14 
06:06:15 localhost nova_compute[295778]: 2025-10-14 10:06:15.923 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:15 localhost nova_compute[295778]: 2025-10-14 10:06:15.951 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:06:15 localhost nova_compute[295778]: 2025-10-14 10:06:15.952 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:06:15 localhost nova_compute[295778]: 2025-10-14 10:06:15.952 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:06:15 localhost nova_compute[295778]: 2025-10-14 10:06:15.952 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:06:15 localhost nova_compute[295778]: 2025-10-14 10:06:15.953 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd 
(subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:06:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:16 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:06:16 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:06:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:06:16 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1736640914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:06:16 localhost nova_compute[295778]: 2025-10-14 10:06:16.402 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:06:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:06:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 06:06:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:06:16 localhost podman[314644]: 2025-10-14 10:06:16.506773382 +0000 UTC m=+0.069898944 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:06:16 localhost podman[314642]: 2025-10-14 10:06:16.561869425 +0000 UTC m=+0.123577399 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, 
name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Oct 14 06:06:16 localhost podman[314642]: 2025-10-14 10:06:16.574013345 +0000 UTC m=+0.135721329 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Oct 14 06:06:16 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:06:16 localhost podman[314644]: 2025-10-14 10:06:16.593033256 +0000 UTC m=+0.156158838 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:06:16 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:06:16 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:06:16 localhost nova_compute[295778]: 2025-10-14 10:06:16.641 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:06:16 localhost nova_compute[295778]: 2025-10-14 10:06:16.642 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12264MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:06:16 localhost nova_compute[295778]: 2025-10-14 10:06:16.642 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:06:16 localhost nova_compute[295778]: 2025-10-14 10:06:16.642 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:06:16 localhost podman[314643]: 2025-10-14 10:06:16.542751421 +0000 UTC m=+0.103266454 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009) Oct 14 06:06:16 localhost podman[314643]: 2025-10-14 10:06:16.673414626 +0000 UTC m=+0.233929659 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:06:16 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:06:16 localhost nova_compute[295778]: 2025-10-14 10:06:16.719 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:06:16 localhost nova_compute[295778]: 2025-10-14 10:06:16.719 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:06:16 localhost nova_compute[295778]: 2025-10-14 10:06:16.740 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:06:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:06:17 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2693381905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:06:17 localhost nova_compute[295778]: 2025-10-14 10:06:17.163 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:06:17 localhost nova_compute[295778]: 2025-10-14 10:06:17.170 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:06:17 localhost nova_compute[295778]: 2025-10-14 10:06:17.187 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:06:17 localhost nova_compute[295778]: 2025-10-14 10:06:17.189 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:06:17 localhost nova_compute[295778]: 2025-10-14 10:06:17.190 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:06:17 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:06:17 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:06:17 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:06:17 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:06:17 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:06:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:17 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:17 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:17 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 
172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:17 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:17 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:17 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:06:17 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:06:17 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:06:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 14 06:06:18 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' 
cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:18 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:18 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:06:18 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:18 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:18 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:18 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:18 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:18 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:18 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:18 localhost ceph-mon[307093]: Reconfiguring crash.np0005486731 (monmap changed)... 
Oct 14 06:06:18 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:18 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain Oct 14 06:06:18 localhost podman[315101]: Oct 14 06:06:18 localhost podman[315101]: 2025-10-14 10:06:18.833686697 +0000 UTC m=+0.074836614 container create 0977e28865c858bbde7b5f2722dab98de8af5df2813fc7a598aeeee234bc7298 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_joliot, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, release=553, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:06:18 localhost systemd[1]: Started libpod-conmon-0977e28865c858bbde7b5f2722dab98de8af5df2813fc7a598aeeee234bc7298.scope. Oct 14 06:06:18 localhost systemd[1]: Started libcrun container. 
Oct 14 06:06:18 localhost podman[315101]: 2025-10-14 10:06:18.802797753 +0000 UTC m=+0.043947680 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:18 localhost podman[315101]: 2025-10-14 10:06:18.915881104 +0000 UTC m=+0.157031011 container init 0977e28865c858bbde7b5f2722dab98de8af5df2813fc7a598aeeee234bc7298 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_joliot, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public) Oct 14 06:06:18 localhost podman[315101]: 2025-10-14 10:06:18.926089543 +0000 UTC m=+0.167239440 container start 0977e28865c858bbde7b5f2722dab98de8af5df2813fc7a598aeeee234bc7298 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_joliot, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, 
io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:06:18 localhost podman[315101]: 2025-10-14 10:06:18.926368111 +0000 UTC m=+0.167518058 container attach 0977e28865c858bbde7b5f2722dab98de8af5df2813fc7a598aeeee234bc7298 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_joliot, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container) Oct 14 
06:06:18 localhost relaxed_joliot[315116]: 167 167 Oct 14 06:06:18 localhost systemd[1]: libpod-0977e28865c858bbde7b5f2722dab98de8af5df2813fc7a598aeeee234bc7298.scope: Deactivated successfully. Oct 14 06:06:18 localhost podman[315101]: 2025-10-14 10:06:18.929578255 +0000 UTC m=+0.170728182 container died 0977e28865c858bbde7b5f2722dab98de8af5df2813fc7a598aeeee234bc7298 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_joliot, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-type=git, release=553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph) Oct 14 06:06:19 localhost podman[315121]: 2025-10-14 10:06:19.025098003 +0000 UTC m=+0.081749026 container remove 0977e28865c858bbde7b5f2722dab98de8af5df2813fc7a598aeeee234bc7298 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_joliot, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, distribution-scope=public, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, 
description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , release=553, ceph=True, architecture=x86_64, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:06:19 localhost systemd[1]: libpod-conmon-0977e28865c858bbde7b5f2722dab98de8af5df2813fc7a598aeeee234bc7298.scope: Deactivated successfully. Oct 14 06:06:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Oct 14 06:06:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:06:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 
handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:19 localhost nova_compute[295778]: 2025-10-14 10:06:19.171 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:19 localhost nova_compute[295778]: 2025-10-14 10:06:19.173 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:06:19 localhost podman[315192]: Oct 14 06:06:19 localhost podman[315192]: 2025-10-14 10:06:19.748829723 +0000 UTC m=+0.073030206 container create 460456ee7b380a1ff8d2654030cd7f2c2743092ff351a8259bca87823cf07807 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_albattani, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, 
build-date=2025-09-24T08:57:55, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, distribution-scope=public, name=rhceph, architecture=x86_64, vcs-type=git, RELEASE=main, ceph=True) Oct 14 06:06:19 localhost systemd[1]: Started libpod-conmon-460456ee7b380a1ff8d2654030cd7f2c2743092ff351a8259bca87823cf07807.scope. Oct 14 06:06:19 localhost systemd[1]: Started libcrun container. Oct 14 06:06:19 localhost podman[315192]: 2025-10-14 10:06:19.81393657 +0000 UTC m=+0.138137063 container init 460456ee7b380a1ff8d2654030cd7f2c2743092ff351a8259bca87823cf07807 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_albattani, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., architecture=x86_64, GIT_CLEAN=True) Oct 14 06:06:19 localhost podman[315192]: 2025-10-14 10:06:19.719342636 +0000 UTC 
m=+0.043543179 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:19 localhost podman[315192]: 2025-10-14 10:06:19.824351904 +0000 UTC m=+0.148552387 container start 460456ee7b380a1ff8d2654030cd7f2c2743092ff351a8259bca87823cf07807 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_albattani, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=553, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph) Oct 14 06:06:19 localhost podman[315192]: 2025-10-14 10:06:19.824615981 +0000 UTC m=+0.148816494 container attach 460456ee7b380a1ff8d2654030cd7f2c2743092ff351a8259bca87823cf07807 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_albattani, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.33.12, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux ) Oct 14 06:06:19 localhost confident_albattani[315207]: 167 167 Oct 14 06:06:19 localhost systemd[1]: libpod-460456ee7b380a1ff8d2654030cd7f2c2743092ff351a8259bca87823cf07807.scope: Deactivated successfully. Oct 14 06:06:19 localhost podman[315192]: 2025-10-14 10:06:19.827525497 +0000 UTC m=+0.151725980 container died 460456ee7b380a1ff8d2654030cd7f2c2743092ff351a8259bca87823cf07807 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_albattani, io.buildah.version=1.33.12, RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=553, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64) Oct 14 06:06:19 localhost systemd[1]: var-lib-containers-storage-overlay-c97311440cde78a03a3c06c72806c7a5060196ca7af2701bb790816c65058998-merged.mount: Deactivated successfully. Oct 14 06:06:19 localhost nova_compute[295778]: 2025-10-14 10:06:19.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:19 localhost nova_compute[295778]: 2025-10-14 10:06:19.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:19 localhost systemd[1]: var-lib-containers-storage-overlay-7fbfd98e7d630f48a5d3ffec94ac980033c4fc26167da10acf9a97d5df53fbf5-merged.mount: Deactivated successfully. 
Oct 14 06:06:19 localhost podman[315212]: 2025-10-14 10:06:19.930920753 +0000 UTC m=+0.094588944 container remove 460456ee7b380a1ff8d2654030cd7f2c2743092ff351a8259bca87823cf07807 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_albattani, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, RELEASE=main) Oct 14 06:06:19 localhost systemd[1]: libpod-conmon-460456ee7b380a1ff8d2654030cd7f2c2743092ff351a8259bca87823cf07807.scope: Deactivated successfully. 
Oct 14 06:06:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:20 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:20 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:20 localhost ceph-mon[307093]: Reconfiguring osd.2 (monmap changed)... Oct 14 06:06:20 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:06:20 localhost ceph-mon[307093]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:06:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Oct 14 06:06:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:06:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:20 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:20 
localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:06:20 localhost podman[315290]: Oct 14 06:06:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:20 localhost podman[315290]: 2025-10-14 10:06:20.730140604 +0000 UTC m=+0.085198358 container create 84db7684abe17b0ceeb32ecb678cbddbdf5131a329ce1834a5502194b600d7b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_wozniak, description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, release=553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-type=git, version=7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:06:20 localhost systemd[1]: Started libpod-conmon-84db7684abe17b0ceeb32ecb678cbddbdf5131a329ce1834a5502194b600d7b6.scope. Oct 14 06:06:20 localhost systemd[1]: Started libcrun container. 
Oct 14 06:06:20 localhost podman[315290]: 2025-10-14 10:06:20.791758029 +0000 UTC m=+0.146815783 container init 84db7684abe17b0ceeb32ecb678cbddbdf5131a329ce1834a5502194b600d7b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_wozniak, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph) Oct 14 06:06:20 localhost podman[315290]: 2025-10-14 10:06:20.696563779 +0000 UTC m=+0.051621573 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:20 localhost podman[315290]: 2025-10-14 10:06:20.802274095 +0000 UTC m=+0.157331849 container start 84db7684abe17b0ceeb32ecb678cbddbdf5131a329ce1834a5502194b600d7b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_wozniak, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , name=rhceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, 
io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git) Oct 14 06:06:20 localhost podman[315290]: 2025-10-14 10:06:20.802782989 +0000 UTC m=+0.157840823 container attach 84db7684abe17b0ceeb32ecb678cbddbdf5131a329ce1834a5502194b600d7b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_wozniak, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, release=553, CEPH_POINT_RELEASE=) Oct 14 
06:06:20 localhost quizzical_wozniak[315305]: 167 167 Oct 14 06:06:20 localhost systemd[1]: libpod-84db7684abe17b0ceeb32ecb678cbddbdf5131a329ce1834a5502194b600d7b6.scope: Deactivated successfully. Oct 14 06:06:20 localhost podman[315290]: 2025-10-14 10:06:20.806796875 +0000 UTC m=+0.161854639 container died 84db7684abe17b0ceeb32ecb678cbddbdf5131a329ce1834a5502194b600d7b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_wozniak, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, version=7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:06:20 localhost systemd[1]: tmp-crun.NH9jcb.mount: Deactivated successfully. Oct 14 06:06:20 localhost systemd[1]: var-lib-containers-storage-overlay-9d14c02a1a2a024e606df2a973bd37a9331120e9b585f75e885d1ea8d8b38ea9-merged.mount: Deactivated successfully. 
Oct 14 06:06:20 localhost nova_compute[295778]: 2025-10-14 10:06:20.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:20 localhost nova_compute[295778]: 2025-10-14 10:06:20.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:20 localhost nova_compute[295778]: 2025-10-14 10:06:20.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:06:20 localhost nova_compute[295778]: 2025-10-14 10:06:20.906 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:20 localhost nova_compute[295778]: 2025-10-14 10:06:20.907 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 14 06:06:20 localhost podman[315310]: 2025-10-14 10:06:20.911569427 +0000 UTC m=+0.096635428 container remove 84db7684abe17b0ceeb32ecb678cbddbdf5131a329ce1834a5502194b600d7b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_wozniak, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., name=rhceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main) Oct 14 06:06:20 localhost systemd[1]: libpod-conmon-84db7684abe17b0ceeb32ecb678cbddbdf5131a329ce1834a5502194b600d7b6.scope: Deactivated successfully. 
Oct 14 06:06:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:21 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:21 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 14 06:06:21 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:21 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:21 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:21 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:21 localhost ceph-mon[307093]: Reconfiguring osd.4 (monmap changed)... 
Oct 14 06:06:21 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:06:21 localhost ceph-mon[307093]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:06:21 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:21 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:21 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:21 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:21 localhost podman[315386]: Oct 14 06:06:21 localhost podman[315386]: 2025-10-14 10:06:21.706843343 +0000 UTC m=+0.064265455 container create 6a932ba1609d2c2afbed0f750677b70fba0dfb4e710e86b586ef432f204d3e48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclean, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , name=rhceph, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.description=Red Hat 
Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, vcs-type=git, RELEASE=main) Oct 14 06:06:21 localhost systemd[1]: Started libpod-conmon-6a932ba1609d2c2afbed0f750677b70fba0dfb4e710e86b586ef432f204d3e48.scope. Oct 14 06:06:21 localhost systemd[1]: Started libcrun container. Oct 14 06:06:21 localhost podman[315386]: 2025-10-14 10:06:21.767270986 +0000 UTC m=+0.124693088 container init 6a932ba1609d2c2afbed0f750677b70fba0dfb4e710e86b586ef432f204d3e48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclean, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, release=553, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.33.12, RELEASE=main, ceph=True, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7) Oct 14 06:06:21 localhost podman[315386]: 2025-10-14 10:06:21.776021857 +0000 UTC m=+0.133443969 container start 6a932ba1609d2c2afbed0f750677b70fba0dfb4e710e86b586ef432f204d3e48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclean, io.k8s.description=Red Hat Ceph Storage 
7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, distribution-scope=public, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) Oct 14 06:06:21 localhost vibrant_mclean[315401]: 167 167 Oct 14 06:06:21 localhost podman[315386]: 2025-10-14 10:06:21.67867696 +0000 UTC m=+0.036099092 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:21 localhost systemd[1]: libpod-6a932ba1609d2c2afbed0f750677b70fba0dfb4e710e86b586ef432f204d3e48.scope: Deactivated successfully. 
Oct 14 06:06:21 localhost podman[315386]: 2025-10-14 10:06:21.776241463 +0000 UTC m=+0.133663565 container attach 6a932ba1609d2c2afbed0f750677b70fba0dfb4e710e86b586ef432f204d3e48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclean, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main) Oct 14 06:06:21 localhost podman[315386]: 2025-10-14 10:06:21.782346623 +0000 UTC m=+0.139768725 container died 6a932ba1609d2c2afbed0f750677b70fba0dfb4e710e86b586ef432f204d3e48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclean, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-09-24T08:57:55, RELEASE=main, io.buildah.version=1.33.12, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64) Oct 14 06:06:21 localhost systemd[1]: var-lib-containers-storage-overlay-ca1b7918aca81b69b67804c6eacb9030efee31a7c9a2d35be00a1a8b4517c633-merged.mount: Deactivated successfully. Oct 14 06:06:21 localhost podman[315406]: 2025-10-14 10:06:21.872814099 +0000 UTC m=+0.081888481 container remove 6a932ba1609d2c2afbed0f750677b70fba0dfb4e710e86b586ef432f204d3e48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclean, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-09-24T08:57:55, distribution-scope=public, io.buildah.version=1.33.12, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, 
io.openshift.expose-services=) Oct 14 06:06:21 localhost systemd[1]: libpod-conmon-6a932ba1609d2c2afbed0f750677b70fba0dfb4e710e86b586ef432f204d3e48.scope: Deactivated successfully. Oct 14 06:06:21 localhost nova_compute[295778]: 2025-10-14 10:06:21.931 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:21 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:21 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:06:21 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 14 06:06:21 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' 
entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mgr services"} : dispatch Oct 14 06:06:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:21 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:22 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... Oct 14 06:06:22 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:06:22 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:22 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:22 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:22 localhost podman[315474]: Oct 14 06:06:22 localhost podman[315474]: 2025-10-14 10:06:22.567379869 +0000 UTC m=+0.079224059 container create 85b1133da9bd72f03359606827568047c5775b0c2aa914bea33f4f74aa6bd53a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_kirch, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, architecture=x86_64) Oct 14 06:06:22 localhost systemd[1]: Started libpod-conmon-85b1133da9bd72f03359606827568047c5775b0c2aa914bea33f4f74aa6bd53a.scope. Oct 14 06:06:22 localhost systemd[1]: Started libcrun container. Oct 14 06:06:22 localhost podman[315474]: 2025-10-14 10:06:22.533796055 +0000 UTC m=+0.045640265 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:22 localhost podman[315474]: 2025-10-14 10:06:22.633514573 +0000 UTC m=+0.145358713 container init 85b1133da9bd72f03359606827568047c5775b0c2aa914bea33f4f74aa6bd53a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_kirch, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, 
CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 06:06:22 localhost podman[315474]: 2025-10-14 10:06:22.642625543 +0000 UTC m=+0.154469683 container start 85b1133da9bd72f03359606827568047c5775b0c2aa914bea33f4f74aa6bd53a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_kirch, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.buildah.version=1.33.12) Oct 14 06:06:22 localhost podman[315474]: 2025-10-14 10:06:22.642902671 +0000 UTC m=+0.154746871 container attach 85b1133da9bd72f03359606827568047c5775b0c2aa914bea33f4f74aa6bd53a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_kirch, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.33.12, RELEASE=main, GIT_CLEAN=True, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, description=Red Hat Ceph Storage 7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main) Oct 14 06:06:22 localhost keen_kirch[315490]: 167 167 Oct 14 06:06:22 localhost systemd[1]: libpod-85b1133da9bd72f03359606827568047c5775b0c2aa914bea33f4f74aa6bd53a.scope: Deactivated successfully. Oct 14 06:06:22 localhost podman[315474]: 2025-10-14 10:06:22.64743893 +0000 UTC m=+0.159283100 container died 85b1133da9bd72f03359606827568047c5775b0c2aa914bea33f4f74aa6bd53a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_kirch, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., ceph=True, RELEASE=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=553, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, 
version=7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:06:22 localhost podman[315495]: 2025-10-14 10:06:22.741616044 +0000 UTC m=+0.085678000 container remove 85b1133da9bd72f03359606827568047c5775b0c2aa914bea33f4f74aa6bd53a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_kirch, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) Oct 14 06:06:22 localhost systemd[1]: libpod-conmon-85b1133da9bd72f03359606827568047c5775b0c2aa914bea33f4f74aa6bd53a.scope: Deactivated successfully. 
Oct 14 06:06:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 14 06:06:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:22 localhost systemd[1]: var-lib-containers-storage-overlay-98cfede5d71047b159e7ccfcc3d0e59695d67792017f469eb196bb0077875066-merged.mount: Deactivated successfully. 
Oct 14 06:06:22 localhost nova_compute[295778]: 2025-10-14 10:06:22.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:22 localhost nova_compute[295778]: 2025-10-14 10:06:22.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:06:22 localhost nova_compute[295778]: 2025-10-14 10:06:22.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:06:22 localhost nova_compute[295778]: 2025-10-14 10:06:22.920 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:06:22 localhost nova_compute[295778]: 2025-10-14 10:06:22.921 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:23 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486731.swasqz (monmap changed)... 
Oct 14 06:06:23 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:06:23 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:23 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:23 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:23 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:23 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Oct 14 06:06:23 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 14 06:06:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config 
generate-minimal-conf"} : dispatch Oct 14 06:06:24 localhost ceph-mon[307093]: Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:06:24 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:06:24 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:24 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:24 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 14 06:06:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:06:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Oct 14 06:06:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 14 06:06:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config 
generate-minimal-conf"} v 0) Oct 14 06:06:24 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:25 localhost ceph-mon[307093]: Reconfiguring osd.1 (monmap changed)... Oct 14 06:06:25 localhost ceph-mon[307093]: Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:06:25 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:25 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:25 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 14 06:06:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 14 06:06:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", 
"caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:25 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:26 localhost ceph-mon[307093]: Reconfiguring osd.5 (monmap changed)... Oct 14 06:06:26 localhost ceph-mon[307093]: Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:06:26 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:26 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:26 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": 
"mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:06:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 14 06:06:26 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mgr services"} : dispatch Oct 14 06:06:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:26 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:27 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... 
Oct 14 06:06:27 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:06:27 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:27 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:27 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 14 06:06:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config 
generate-minimal-conf"} v 0) Oct 14 06:06:27 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:28 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... Oct 14 06:06:28 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:06:28 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:28 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:28 localhost ceph-mon[307093]: Reconfiguring crash.np0005486733 (monmap changed)... Oct 14 06:06:28 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:28 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:06:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:28 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:28 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) 
Oct 14 06:06:28 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 14 06:06:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:28 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:06:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Oct 14 06:06:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:06:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:29 localhost ceph-mon[307093]: log_channel(audit) log [DBG] 
: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:29 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:29 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:29 localhost ceph-mon[307093]: Reconfiguring osd.0 (monmap changed)... Oct 14 06:06:29 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 14 06:06:29 localhost ceph-mon[307093]: Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:06:29 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:06:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:30 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:30 localhost ceph-mon[307093]: Reconfiguring osd.3 (monmap changed)... 
Oct 14 06:06:30 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:06:30 localhost ceph-mon[307093]: Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:06:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:30 localhost podman[315511]: 2025-10-14 10:06:30.571996119 +0000 UTC m=+0.101395264 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:06:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 14 06:06:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:30 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:30 localhost podman[315511]: 2025-10-14 10:06:30.588044362 +0000 UTC m=+0.117443557 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:06:30 localhost podman[246584]: time="2025-10-14T10:06:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:06:30 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:06:30 localhost podman[246584]: @ - - [14/Oct/2025:10:06:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:06:30 localhost podman[246584]: @ - - [14/Oct/2025:10:06:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18341 "" "Go-http-client/1.1" Oct 14 06:06:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:06:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 14 06:06:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mgr services"} : dispatch Oct 14 06:06:31 localhost 
ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 14 06:06:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Oct 14 06:06:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:06:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:31 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:31 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:31 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... 
Oct 14 06:06:31 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:31 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:06:31 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:31 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:31 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:31 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:31 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:06:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:33 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486733.primvu (monmap changed)... 
Oct 14 06:06:33 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:06:33 localhost ceph-mon[307093]: Deploying daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:06:33 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:33 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:33 localhost openstack_network_exporter[248748]: ERROR 10:06:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:06:33 localhost openstack_network_exporter[248748]: ERROR 10:06:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:06:33 localhost openstack_network_exporter[248748]: ERROR 10:06:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:06:33 localhost openstack_network_exporter[248748]: ERROR 10:06:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:06:33 localhost openstack_network_exporter[248748]: Oct 14 06:06:33 localhost openstack_network_exporter[248748]: ERROR 10:06:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:06:33 localhost openstack_network_exporter[248748]: Oct 14 06:06:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:34 localhost ceph-mon[307093]: 
log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:06:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:06:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:06:34 localhost podman[315595]: 2025-10-14 10:06:34.555245501 +0000 UTC m=+0.089769457 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 06:06:34 localhost podman[315595]: 2025-10-14 10:06:34.58589982 +0000 UTC m=+0.120423756 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 06:06:34 localhost podman[315596]: 2025-10-14 10:06:34.605899017 +0000 UTC m=+0.140378922 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 06:06:34 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 06:06:34 localhost podman[315596]: 2025-10-14 10:06:34.619098135 +0000 UTC m=+0.153578030 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 14 06:06:34 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:06:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:06:35 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:06:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 14 06:06:35 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:06:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 14 06:06:35 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 14 06:06:35 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 14 06:06:35 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:35 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:35 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:35 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:35 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:06:35 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:35 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 14 06:06:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 14 06:06:35 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Oct 14 06:06:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Oct 14 06:06:36 localhost ceph-mon[307093]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 14 06:06:36 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Oct 14 06:06:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader).monmap v16 adding/updating np0005486733 at [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to monitor cluster
Oct 14 06:06:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:36 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:36 localhost ceph-mgr[300442]: ms_deliver_dispatch: unhandled message 0x55aa8513f1e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Oct 14 06:06:36 localhost ceph-mon[307093]: mon.np0005486731@0(probing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486731"} v 0)
Oct 14 06:06:36 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486731"} : dispatch
Oct 14 06:06:36 localhost ceph-mon[307093]: mon.np0005486731@0(probing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486732"} v 0)
Oct 14 06:06:36 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486732"} : dispatch
Oct 14 06:06:36 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election
Oct 14 06:06:36 localhost ceph-mon[307093]: paxos.0).electionLogic(66) init, last seen epoch 66
Oct 14 06:06:36 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:06:36 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:36 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:37 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0)
Oct 14 06:06:37 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:37 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:38 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:38 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:39 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:39 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:40 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:40 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:41 localhost ceph-mds[299096]: mds.beacon.mds.np0005486731.onyaog missed beacon ack from the monitors
Oct 14 06:06:41 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 is new leader, mons np0005486731,np0005486732 in quorum (ranks 0,1)
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : monmap epoch 17
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : fsid fcadf6e2-9176-5818-a8d0-37b19acf8eaf
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : last_changed 2025-10-14T10:06:36.543119+0000
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : created 2025-10-14T07:49:51.150761+0000
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005486731
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005486732
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005486733
Oct 14 06:06:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005486732.xkownj=up:active} 2 up:standby
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e82: 6 total, 6 up, 6 in
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e30: np0005486732.pasqzz(active, since 89s), standbys: np0005486733.primvu, np0005486728.giajub, np0005486729.xpybho, np0005486730.ddfidc, np0005486731.swasqz
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum np0005486731,np0005486732 (MON_DOWN)
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s); 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005486731,np0005486732
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : daemon mon.np0005486733 on np0005486733.localdomain is in unknown state
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub']
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho']
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum np0005486731,np0005486732
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : mon.np0005486733 (rank 2) addr [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum)
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0)
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:06:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 14 06:06:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:06:41 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election
Oct 14 06:06:41 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election
Oct 14 06:06:41 localhost ceph-mon[307093]: mon.np0005486731 is new leader, mons np0005486731,np0005486732 in quorum (ranks 0,1)
Oct 14 06:06:41 localhost ceph-mon[307093]: Health check failed: 1/3 mons down, quorum np0005486731,np0005486732 (MON_DOWN)
Oct 14 06:06:41 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s); 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005486731,np0005486732
Oct 14 06:06:41 localhost ceph-mon[307093]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 14 06:06:41 localhost ceph-mon[307093]: daemon mon.np0005486733 on np0005486733.localdomain is in unknown state
Oct 14 06:06:41 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm
Oct 14 06:06:41 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm
Oct 14 06:06:41 localhost ceph-mon[307093]: stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm
Oct 14 06:06:41 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm
Oct 14 06:06:41 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub']
Oct 14 06:06:41 localhost ceph-mon[307093]: stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho']
Oct 14 06:06:41 localhost ceph-mon[307093]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005486731,np0005486732
Oct 14 06:06:41 localhost ceph-mon[307093]: mon.np0005486733 (rank 2) addr [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum)
Oct 14 06:06:41 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:41 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:41 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:06:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:42 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:42 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:06:42 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:06:42 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:06:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 06:06:42 localhost podman[315908]: 2025-10-14 10:06:42.7999815 +0000 UTC m=+0.087751734 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd)
Oct 14 06:06:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 06:06:42 localhost podman[315908]: 2025-10-14 10:06:42.813541638 +0000 UTC m=+0.101311842 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:06:42 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:06:42 localhost podman[315958]: 2025-10-14 10:06:42.908226424 +0000 UTC m=+0.089230904 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 14 06:06:42 localhost podman[315958]: 2025-10-14 10:06:42.943464572 +0000 UTC m=+0.124469082 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:06:42 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:06:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0)
Oct 14 06:06:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486731.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 calling monitor election
Oct 14 06:06:43 localhost ceph-mon[307093]: paxos.0).electionLogic(68) init, last seen epoch 68
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : mon.np0005486731 is new leader, mons np0005486731,np0005486732,np0005486733 in quorum (ranks 0,1,2)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : monmap epoch 17
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : fsid fcadf6e2-9176-5818-a8d0-37b19acf8eaf
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : last_changed 2025-10-14T10:06:36.543119+0000
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : created 2025-10-14T07:49:51.150761+0000
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005486731
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005486732
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005486733
Oct 14 06:06:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005486732.xkownj=up:active} 2 up:standby
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e82: 6 total, 6 up, 6 in
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e30: np0005486732.pasqzz(active, since 91s), standbys: np0005486733.primvu, np0005486728.giajub, np0005486729.xpybho, np0005486730.ddfidc, np0005486731.swasqz
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005486731,np0005486732)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s); 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : daemon mon.np0005486733 on np0005486733.localdomain is in unknown state
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub']
Oct 14 06:06:43 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho']
Oct 14 06:06:43 localhost podman[316083]:
Oct 14 06:06:43 localhost podman[316083]: 2025-10-14 10:06:43.968687431 +0000 UTC m=+0.085720072 container create 1990a26355fdfc4cd08930a70deeddf564cdbfc4cd53e0b66d938e4abda0f2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_turing, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, version=7, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 14 06:06:44 localhost systemd[1]: Started libpod-conmon-1990a26355fdfc4cd08930a70deeddf564cdbfc4cd53e0b66d938e4abda0f2fc.scope.
Oct 14 06:06:44 localhost podman[316083]: 2025-10-14 10:06:43.929716323 +0000 UTC m=+0.046748994 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:06:44 localhost systemd[1]: Started libcrun container.
Oct 14 06:06:44 localhost podman[316083]: 2025-10-14 10:06:44.049670406 +0000 UTC m=+0.166703047 container init 1990a26355fdfc4cd08930a70deeddf564cdbfc4cd53e0b66d938e4abda0f2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_turing, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, version=7, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 14 06:06:44 localhost podman[316083]: 2025-10-14 10:06:44.059379481 +0000 UTC m=+0.176412122 container start 1990a26355fdfc4cd08930a70deeddf564cdbfc4cd53e0b66d938e4abda0f2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_turing, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7)
Oct 14 06:06:44 localhost podman[316083]: 2025-10-14 10:06:44.059634299 +0000 UTC m=+0.176666940 container attach 1990a26355fdfc4cd08930a70deeddf564cdbfc4cd53e0b66d938e4abda0f2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_turing, release=553, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=)
Oct 14 06:06:44 localhost flamboyant_turing[316098]: 167 167
Oct 14 06:06:44 localhost systemd[1]: libpod-1990a26355fdfc4cd08930a70deeddf564cdbfc4cd53e0b66d938e4abda0f2fc.scope: Deactivated successfully.
Oct 14 06:06:44 localhost podman[316083]: 2025-10-14 10:06:44.063158631 +0000 UTC m=+0.180191302 container died 1990a26355fdfc4cd08930a70deeddf564cdbfc4cd53e0b66d938e4abda0f2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_turing, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, com.redhat.component=rhceph-container, release=553, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486732 calling monitor election
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486731 calling monitor election
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486733 calling monitor election
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486731 is new leader, mons np0005486731,np0005486732,np0005486733 in quorum (ranks 0,1,2)
Oct 14 06:06:44 localhost ceph-mon[307093]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005486731,np0005486732)
Oct 14 06:06:44 localhost ceph-mon[307093]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s); 2 stray daemon(s) not managed by cephadm; 2 stray host(s) with 2 daemon(s) not managed by cephadm
Oct 14 06:06:44 localhost ceph-mon[307093]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 14 06:06:44 localhost ceph-mon[307093]: daemon mon.np0005486733 on np0005486733.localdomain is in unknown state
Oct 14 06:06:44 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm
Oct 14 06:06:44 localhost ceph-mon[307093]: stray daemon mgr.np0005486728.giajub on host np0005486728.localdomain not managed by cephadm
Oct 14 06:06:44 localhost ceph-mon[307093]: stray daemon mgr.np0005486729.xpybho on host np0005486729.localdomain not managed by cephadm
Oct 14 06:06:44 localhost ceph-mon[307093]: [WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm
Oct 14 06:06:44 localhost ceph-mon[307093]: stray host np0005486728.localdomain has 1 stray daemons: ['mgr.np0005486728.giajub']
Oct 14 06:06:44 localhost ceph-mon[307093]: stray host np0005486729.localdomain has 1 stray daemons: ['mgr.np0005486729.xpybho']
Oct 14 06:06:44 localhost podman[316103]: 2025-10-14 10:06:44.171376994 +0000 UTC m=+0.098559699 container remove 1990a26355fdfc4cd08930a70deeddf564cdbfc4cd53e0b66d938e4abda0f2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_turing, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 14 06:06:44 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e31: np0005486732.pasqzz(active, since 91s), standbys: np0005486733.primvu, np0005486728.giajub, np0005486729.xpybho, np0005486730.ddfidc, np0005486731.swasqz
Oct 14 06:06:44 localhost systemd[1]: libpod-conmon-1990a26355fdfc4cd08930a70deeddf564cdbfc4cd53e0b66d938e4abda0f2fc.scope: Deactivated successfully.
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:06:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:06:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct 14 06:06:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:06:44 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:06:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0)
Oct 14 06:06:44 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch
Oct 14 06:06:44 localhost podman[316170]:
Oct 14 06:06:44 localhost podman[316170]: 2025-10-14 10:06:44.925259329 +0000 UTC m=+0.075545602 container create 3a1b1703f0db385c9a067195f148ae1c64788cf6ddad20536171c207cf3f6ca1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_poincare, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_BRANCH=main, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, name=rhceph)
Oct 14 06:06:44 localhost systemd[1]: Started libpod-conmon-3a1b1703f0db385c9a067195f148ae1c64788cf6ddad20536171c207cf3f6ca1.scope.
Oct 14 06:06:44 localhost systemd[1]: Started libcrun container.
Oct 14 06:06:44 localhost systemd[1]: var-lib-containers-storage-overlay-5e9cd1454d1515ce21727dc2756c3a110a07b418979904e612bef7b7bdbd4c3d-merged.mount: Deactivated successfully.
Oct 14 06:06:44 localhost podman[316170]: 2025-10-14 10:06:44.98676693 +0000 UTC m=+0.137053203 container init 3a1b1703f0db385c9a067195f148ae1c64788cf6ddad20536171c207cf3f6ca1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_poincare, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, name=rhceph, vendor=Red Hat, Inc., version=7, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12)
Oct 14 06:06:44 localhost podman[316170]: 2025-10-14 10:06:44.894466758 +0000 UTC m=+0.044753041 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:06:44 localhost podman[316170]: 2025-10-14 10:06:44.996750154 +0000 UTC m=+0.147036427 container start 3a1b1703f0db385c9a067195f148ae1c64788cf6ddad20536171c207cf3f6ca1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_poincare, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.)
Oct 14 06:06:44 localhost podman[316170]: 2025-10-14 10:06:44.997063872 +0000 UTC m=+0.147350175 container attach 3a1b1703f0db385c9a067195f148ae1c64788cf6ddad20536171c207cf3f6ca1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_poincare, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.33.12, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=)
Oct 14 06:06:45 localhost unruffled_poincare[316186]: 167 167
Oct 14 06:06:45 localhost systemd[1]: libpod-3a1b1703f0db385c9a067195f148ae1c64788cf6ddad20536171c207cf3f6ca1.scope: Deactivated successfully.
Oct 14 06:06:45 localhost podman[316170]: 2025-10-14 10:06:45.002429883 +0000 UTC m=+0.152716156 container died 3a1b1703f0db385c9a067195f148ae1c64788cf6ddad20536171c207cf3f6ca1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_poincare, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, name=rhceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main)
Oct 14 06:06:45 localhost podman[316191]: 2025-10-14 10:06:45.10470471 +0000 UTC m=+0.089427158 container remove 3a1b1703f0db385c9a067195f148ae1c64788cf6ddad20536171c207cf3f6ca1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_poincare, name=rhceph, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Oct 14 06:06:45 localhost systemd[1]: libpod-conmon-3a1b1703f0db385c9a067195f148ae1c64788cf6ddad20536171c207cf3f6ca1.scope: Deactivated successfully.
Oct 14 06:06:45 localhost ceph-mon[307093]: Reconfiguring crash.np0005486731 (monmap changed)...
Oct 14 06:06:45 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486731 on np0005486731.localdomain
Oct 14 06:06:45 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:45 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:45 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 14 06:06:45 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Oct 14 06:06:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:06:45 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:06:45 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Oct 14 06:06:45 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 14 06:06:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:06:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:06:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 14 06:06:45 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz'
Oct 14 06:06:45 localhost podman[316269]:
Oct 14 06:06:45 localhost podman[316269]: 2025-10-14 10:06:45.934011053 +0000 UTC m=+0.077142795 container create 3fe67b155ec642b1110b887b4ddbba87632c15ea973911a22335c348f0b16c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_maxwell, ceph=True, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , release=553, io.buildah.version=1.33.12, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, architecture=x86_64)
Oct 14 06:06:45 localhost systemd[1]: Started libpod-conmon-3fe67b155ec642b1110b887b4ddbba87632c15ea973911a22335c348f0b16c38.scope.
Oct 14 06:06:45 localhost systemd[1]: var-lib-containers-storage-overlay-d75ae4c714ef13df84674e7a8aa69938060a34614e1f8fa61687e5dbbe4091f6-merged.mount: Deactivated successfully.
Oct 14 06:06:45 localhost systemd[1]: Started libcrun container.
Oct 14 06:06:46 localhost podman[316269]: 2025-10-14 10:06:45.903893889 +0000 UTC m=+0.047025641 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 14 06:06:46 localhost podman[316269]: 2025-10-14 10:06:46.012500113 +0000 UTC m=+0.155631855 container init 3fe67b155ec642b1110b887b4ddbba87632c15ea973911a22335c348f0b16c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_maxwell, io.openshift.tags=rhceph ceph, version=7, ceph=True, RELEASE=main, build-date=2025-09-24T08:57:55, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_BRANCH=main, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph)
Oct 14 06:06:46 localhost podman[316269]: 2025-10-14 10:06:46.023652107 +0000 UTC m=+0.166783849 container start 3fe67b155ec642b1110b887b4ddbba87632c15ea973911a22335c348f0b16c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_maxwell, name=rhceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True, release=553, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph)
Oct 14 06:06:46 localhost podman[316269]: 2025-10-14 10:06:46.024775946 +0000 UTC m=+0.167907688 container attach 3fe67b155ec642b1110b887b4ddbba87632c15ea973911a22335c348f0b16c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_maxwell, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 14 06:06:46 localhost silly_maxwell[316284]: 167 167
Oct 14 06:06:46 localhost systemd[1]: libpod-3fe67b155ec642b1110b887b4ddbba87632c15ea973911a22335c348f0b16c38.scope: Deactivated successfully.
Oct 14 06:06:46 localhost podman[316269]: 2025-10-14 10:06:46.027251421 +0000 UTC m=+0.170383183 container died 3fe67b155ec642b1110b887b4ddbba87632c15ea973911a22335c348f0b16c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_maxwell, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, name=rhceph, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, version=7, description=Red Hat Ceph Storage 7) Oct 14 06:06:46 localhost podman[316289]: 2025-10-14 10:06:46.131886519 +0000 UTC m=+0.091324328 container remove 3fe67b155ec642b1110b887b4ddbba87632c15ea973911a22335c348f0b16c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_maxwell, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, 
version=7, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:06:46 localhost systemd[1]: libpod-conmon-3fe67b155ec642b1110b887b4ddbba87632c15ea973911a22335c348f0b16c38.scope: Deactivated successfully. Oct 14 06:06:46 localhost ceph-mon[307093]: Reconfiguring osd.2 (monmap changed)... Oct 14 06:06:46 localhost ceph-mon[307093]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:06:46 localhost ceph-mon[307093]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s)) Oct 14 06:06:46 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:46 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:46 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:06:46 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:46 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key 
set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:46 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 14 06:06:46 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:46 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:06:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:06:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:06:46 localhost podman[316391]: Oct 14 06:06:46 localhost podman[316391]: 2025-10-14 10:06:46.968512696 +0000 UTC m=+0.069642857 container create 7493bfe3c23037e8a63a7ec71f6cb22d2bee284efa25e0f31955d2a074569571 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_roentgen, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, release=553, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:06:46 localhost systemd[1]: var-lib-containers-storage-overlay-5f4322381e367d871b1366bdc11998414af2ea1f44ffb68cdfc22b0212e94718-merged.mount: Deactivated successfully. Oct 14 06:06:47 localhost systemd[1]: Started libpod-conmon-7493bfe3c23037e8a63a7ec71f6cb22d2bee284efa25e0f31955d2a074569571.scope. 
Oct 14 06:06:47 localhost podman[316363]: 2025-10-14 10:06:47.0274513 +0000 UTC m=+0.164002735 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 06:06:47 localhost systemd[1]: Started libcrun container. Oct 14 06:06:47 localhost podman[316391]: 2025-10-14 10:06:46.948219001 +0000 UTC m=+0.049349202 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:47 localhost podman[316391]: 2025-10-14 10:06:47.05209613 +0000 UTC m=+0.153226291 container init 7493bfe3c23037e8a63a7ec71f6cb22d2bee284efa25e0f31955d2a074569571 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_roentgen, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, RELEASE=main, io.buildah.version=1.33.12) Oct 14 06:06:47 localhost podman[316391]: 2025-10-14 10:06:47.062755711 +0000 UTC m=+0.163885922 container start 7493bfe3c23037e8a63a7ec71f6cb22d2bee284efa25e0f31955d2a074569571 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_roentgen, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.buildah.version=1.33.12, release=553, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 14 06:06:47 localhost zealous_roentgen[316417]: 167 167 Oct 14 06:06:47 localhost podman[316391]: 2025-10-14 10:06:47.063231893 +0000 UTC m=+0.164362084 container attach 7493bfe3c23037e8a63a7ec71f6cb22d2bee284efa25e0f31955d2a074569571 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_roentgen, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, ceph=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vcs-type=git, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 14 06:06:47 localhost systemd[1]: libpod-7493bfe3c23037e8a63a7ec71f6cb22d2bee284efa25e0f31955d2a074569571.scope: Deactivated successfully. 
Oct 14 06:06:47 localhost podman[316391]: 2025-10-14 10:06:47.066173431 +0000 UTC m=+0.167303652 container died 7493bfe3c23037e8a63a7ec71f6cb22d2bee284efa25e0f31955d2a074569571 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_roentgen, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux , version=7, ceph=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., release=553, io.buildah.version=1.33.12, GIT_BRANCH=main, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:06:47 localhost podman[316364]: 2025-10-14 10:06:47.120081282 +0000 UTC m=+0.256270677 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller) Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost podman[316363]: 2025-10-14 10:06:47.148833739 +0000 UTC m=+0.285385214 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350) Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:47 localhost podman[316364]: 
2025-10-14 10:06:47.157023125 +0000 UTC m=+0.293212570 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:47 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:06:47 localhost ceph-mon[307093]: Reconfiguring osd.4 (monmap changed)... 
Oct 14 06:06:47 localhost ceph-mon[307093]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:06:47 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486731.onyaog", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:06:47 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost podman[316429]: 2025-10-14 10:06:47.214782699 +0000 UTC m=+0.140277520 container remove 7493bfe3c23037e8a63a7ec71f6cb22d2bee284efa25e0f31955d2a074569571 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_roentgen, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, release=553, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on 
RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64) Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:47 localhost systemd[1]: libpod-conmon-7493bfe3c23037e8a63a7ec71f6cb22d2bee284efa25e0f31955d2a074569571.scope: Deactivated successfully. Oct 14 06:06:47 localhost podman[316365]: 2025-10-14 10:06:47.179882738 +0000 UTC m=+0.314032970 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 
'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:47 localhost podman[316365]: 2025-10-14 10:06:47.264347875 +0000 UTC m=+0.398498137 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': 
True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:06:47 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:47 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command 
mon_command({"prefix": "mgr services"} v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "mgr services"} : dispatch Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:47 localhost podman[316522]: Oct 14 06:06:47 localhost podman[316522]: 2025-10-14 10:06:47.95567355 +0000 UTC m=+0.077199065 container create d35208bcd7afe0c56b96a67737f63139182085c42446d0009b620c50f9cd141b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_ellis, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, 
vendor=Red Hat, Inc., io.buildah.version=1.33.12, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, CEPH_POINT_RELEASE=) Oct 14 06:06:47 localhost systemd[1]: var-lib-containers-storage-overlay-2a8c2a49540fa3836d8fc7f87c0b3781692aec614f85a1a064530311a8b8c46b-merged.mount: Deactivated successfully. Oct 14 06:06:47 localhost systemd[1]: Started libpod-conmon-d35208bcd7afe0c56b96a67737f63139182085c42446d0009b620c50f9cd141b.scope. Oct 14 06:06:48 localhost systemd[1]: Started libcrun container. Oct 14 06:06:48 localhost podman[316522]: 2025-10-14 10:06:48.023847318 +0000 UTC m=+0.145372843 container init d35208bcd7afe0c56b96a67737f63139182085c42446d0009b620c50f9cd141b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_ellis, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:06:48 localhost podman[316522]: 2025-10-14 10:06:47.924360365 +0000 UTC m=+0.045885910 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:48 localhost podman[316522]: 2025-10-14 10:06:48.032849586 +0000 UTC m=+0.154375131 container start d35208bcd7afe0c56b96a67737f63139182085c42446d0009b620c50f9cd141b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_ellis, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, maintainer=Guillaume Abrioux , version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, name=rhceph, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:06:48 localhost podman[316522]: 2025-10-14 10:06:48.033223865 +0000 UTC m=+0.154749390 container attach d35208bcd7afe0c56b96a67737f63139182085c42446d0009b620c50f9cd141b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_ellis, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, distribution-scope=public, build-date=2025-09-24T08:57:55, release=553, 
ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 06:06:48 localhost exciting_ellis[316537]: 167 167 Oct 14 06:06:48 localhost systemd[1]: libpod-d35208bcd7afe0c56b96a67737f63139182085c42446d0009b620c50f9cd141b.scope: Deactivated successfully. Oct 14 06:06:48 localhost podman[316522]: 2025-10-14 10:06:48.036244255 +0000 UTC m=+0.157769800 container died d35208bcd7afe0c56b96a67737f63139182085c42446d0009b620c50f9cd141b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_ellis, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=553, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, 
name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12) Oct 14 06:06:48 localhost podman[316542]: 2025-10-14 10:06:48.134982098 +0000 UTC m=+0.086796820 container remove d35208bcd7afe0c56b96a67737f63139182085c42446d0009b620c50f9cd141b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_ellis, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 14 06:06:48 localhost systemd[1]: libpod-conmon-d35208bcd7afe0c56b96a67737f63139182085c42446d0009b620c50f9cd141b.scope: Deactivated successfully. Oct 14 06:06:48 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486731.onyaog (monmap changed)... 
Oct 14 06:06:48 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486731.onyaog on np0005486731.localdomain Oct 14 06:06:48 localhost ceph-mon[307093]: Reconfig service osd.default_drive_group Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486731.swasqz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' 
entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3042893165' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3042893165' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mgr fail"} v 0) Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e82 do_prune osdmap full prune enabled Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Activating manager daemon np0005486733.primvu Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 e83: 6 total, 6 up, 6 in Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e83: 6 total, 6 up, 6 in Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e32: np0005486733.primvu(active, starting, since 0.0673541s), standbys: np0005486728.giajub, np0005486729.xpybho, np0005486730.ddfidc, np0005486731.swasqz Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Manager daemon np0005486733.primvu is now available Oct 14 06:06:48 localhost systemd[1]: session-71.scope: Deactivated successfully. Oct 14 06:06:48 localhost systemd[1]: session-71.scope: Consumed 27.751s CPU time. 
Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"} v 0) Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"} : dispatch Oct 14 06:06:48 localhost systemd-logind[760]: Session 71 logged out. Waiting for processes to exit. Oct 14 06:06:48 localhost systemd-logind[760]: Removed session 71. Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"}]': finished Oct 14 06:06:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"} v 0) Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"} : dispatch Oct 14 06:06:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"}]': finished Oct 14 06:06:48 localhost systemd[1]: var-lib-containers-storage-overlay-fe121d4a3ebce9237d7b7c52cedc10509db642cd2b01ee6d98a692b60e9627e0-merged.mount: Deactivated successfully. 
Oct 14 06:06:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486733.primvu/mirror_snapshot_schedule"} v 0) Oct 14 06:06:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486733.primvu/mirror_snapshot_schedule"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486733.primvu/trash_purge_schedule"} v 0) Oct 14 06:06:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486733.primvu/trash_purge_schedule"} : dispatch Oct 14 06:06:49 localhost sshd[316558]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:06:49 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486731.swasqz (monmap changed)... Oct 14 06:06:49 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486731.swasqz on np0005486731.localdomain Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' Oct 14 06:06:49 localhost ceph-mon[307093]: Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17415 172.18.0.107:0/230210271' entity='mgr.np0005486732.pasqzz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:06:49 localhost ceph-mon[307093]: from='client.? 
' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: Activating manager daemon np0005486733.primvu Oct 14 06:06:49 localhost ceph-mon[307093]: from='client.? 172.18.0.200:0/664587782' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 14 06:06:49 localhost ceph-mon[307093]: Manager daemon np0005486733.primvu is now available Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"}]': finished Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005486730.localdomain.devices.0"}]': finished Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486733.primvu/mirror_snapshot_schedule"} : dispatch Oct 14 06:06:49 
localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486733.primvu/mirror_snapshot_schedule"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486733.primvu/trash_purge_schedule"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486733.primvu/trash_purge_schedule"} : dispatch Oct 14 06:06:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:06:49 localhost systemd-logind[760]: New session 72 of user ceph-admin. Oct 14 06:06:49 localhost systemd[1]: Started Session 72 of User ceph-admin. 
Oct 14 06:06:49 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e33: np0005486733.primvu(active, since 1.16207s), standbys: np0005486728.giajub, np0005486729.xpybho, np0005486730.ddfidc, np0005486731.swasqz Oct 14 06:06:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:06:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:06:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:50 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:50 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: removing stray HostCache host record np0005486730.localdomain.devices.0 Oct 14 06:06:50 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:50 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm) Oct 14 06:06:50 localhost 
ceph-mon[307093]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_HOST (was: 2 stray host(s) with 2 daemon(s) not managed by cephadm) Oct 14 06:06:50 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Cluster is now healthy Oct 14 06:06:51 localhost podman[316728]: 2025-10-14 10:06:51.018180838 +0000 UTC m=+0.093904226 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=553, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, com.redhat.component=rhceph-container) Oct 14 06:06:51 localhost podman[316728]: 2025-10-14 10:06:51.144377166 +0000 UTC m=+0.220100494 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, release=553, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Oct 14 06:06:51 localhost ceph-mon[307093]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm) Oct 14 06:06:51 localhost ceph-mon[307093]: Health check cleared: CEPHADM_STRAY_HOST (was: 2 stray host(s) with 2 daemon(s) not managed by cephadm) Oct 14 06:06:51 localhost ceph-mon[307093]: Cluster is now healthy Oct 14 06:06:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:51 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:51 localhost ceph-mon[307093]: log_channel(audit) log 
[INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:51 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:51 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:51 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:51 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:52 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e34: np0005486733.primvu(active, since 3s), standbys: np0005486728.giajub, np0005486729.xpybho, np0005486730.ddfidc, np0005486731.swasqz Oct 14 06:06:52 localhost ceph-mon[307093]: [14/Oct/2025:10:06:51] ENGINE Bus STARTING Oct 14 06:06:52 localhost ceph-mon[307093]: [14/Oct/2025:10:06:51] ENGINE Serving on http://172.18.0.108:8765 Oct 14 06:06:52 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:52 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:52 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:52 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:52 localhost ceph-mon[307093]: from='mgr.17433 ' 
entity='mgr.np0005486733.primvu' Oct 14 06:06:52 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.4", "name": 
"osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", 
"name": "osd_memory_target"} v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 14 06:06:53 localhost ceph-mon[307093]: [14/Oct/2025:10:06:51] ENGINE Serving on https://172.18.0.108:7150 Oct 14 06:06:53 localhost ceph-mon[307093]: [14/Oct/2025:10:06:51] ENGINE Bus STARTED Oct 14 06:06:53 localhost ceph-mon[307093]: [14/Oct/2025:10:06:51] ENGINE Client ('172.18.0.108', 54724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config 
rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' 
cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:06:53 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : Standby manager daemon np0005486732.pasqzz started Oct 14 06:06:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:06:54 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486731.localdomain to 836.6M Oct 14 06:06:54 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:06:54 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486732.localdomain to 836.6M Oct 14 06:06:54 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:06:54 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486733.localdomain to 836.6M Oct 14 06:06:54 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:06:54 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf Oct 14 06:06:54 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf Oct 14 06:06:54 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf Oct 14 06:06:54 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e35: np0005486733.primvu(active, since 5s), standbys: np0005486728.giajub, np0005486729.xpybho, np0005486730.ddfidc, np0005486731.swasqz, np0005486732.pasqzz 
Oct 14 06:06:55 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:06:55 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:06:55 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf Oct 14 06:06:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:06:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' 
Oct 14 06:06:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:06:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:06:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:06:56 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:06:56 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 14 06:06:56 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:06:56 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:56 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:57 localhost podman[317685]: Oct 14 06:06:57 localhost podman[317685]: 2025-10-14 10:06:57.104318639 +0000 UTC m=+0.079081446 container create 7a19e68c296342348735ef6f3708a40acdc16d307823b1b2b80e2968b2f960d2 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_galileo, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=) Oct 14 06:06:57 localhost systemd[1]: Started libpod-conmon-7a19e68c296342348735ef6f3708a40acdc16d307823b1b2b80e2968b2f960d2.scope. Oct 14 06:06:57 localhost systemd[1]: Started libcrun container. 
Oct 14 06:06:57 localhost podman[317685]: 2025-10-14 10:06:57.169206499 +0000 UTC m=+0.143969306 container init 7a19e68c296342348735ef6f3708a40acdc16d307823b1b2b80e2968b2f960d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_galileo, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, RELEASE=main, io.buildah.version=1.33.12, release=553, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=) Oct 14 06:06:57 localhost podman[317685]: 2025-10-14 10:06:57.07440505 +0000 UTC m=+0.049167887 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:57 localhost podman[317685]: 2025-10-14 10:06:57.181457293 +0000 UTC m=+0.156220110 container start 7a19e68c296342348735ef6f3708a40acdc16d307823b1b2b80e2968b2f960d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_galileo, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, version=7, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, release=553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main) Oct 14 06:06:57 localhost podman[317685]: 2025-10-14 10:06:57.181856133 +0000 UTC m=+0.156618980 container attach 7a19e68c296342348735ef6f3708a40acdc16d307823b1b2b80e2968b2f960d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_galileo, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-type=git) Oct 14 06:06:57 
localhost nifty_galileo[317700]: 167 167 Oct 14 06:06:57 localhost systemd[1]: libpod-7a19e68c296342348735ef6f3708a40acdc16d307823b1b2b80e2968b2f960d2.scope: Deactivated successfully. Oct 14 06:06:57 localhost podman[317685]: 2025-10-14 10:06:57.185019066 +0000 UTC m=+0.159781923 container died 7a19e68c296342348735ef6f3708a40acdc16d307823b1b2b80e2968b2f960d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_galileo, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, ceph=True, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-09-24T08:57:55) Oct 14 06:06:57 localhost nova_compute[295778]: 2025-10-14 10:06:57.192 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:06:57 localhost ceph-mon[307093]: log_channel(cluster) log [WRN] : Health check failed: 3 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Oct 14 06:06:57 localhost ceph-mon[307093]: log_channel(cluster) 
log [WRN] : Health check failed: 3 stray host(s) with 3 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Oct 14 06:06:57 localhost podman[317705]: 2025-10-14 10:06:57.279499847 +0000 UTC m=+0.086064050 container remove 7a19e68c296342348735ef6f3708a40acdc16d307823b1b2b80e2968b2f960d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_galileo, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, architecture=x86_64, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main) Oct 14 06:06:57 localhost systemd[1]: libpod-conmon-7a19e68c296342348735ef6f3708a40acdc16d307823b1b2b80e2968b2f960d2.scope: Deactivated successfully. 
Oct 14 06:06:57 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:06:57 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:06:57 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 14 06:06:57 localhost ceph-mon[307093]: Health check failed: 3 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Oct 14 06:06:57 localhost ceph-mon[307093]: Health check failed: 3 stray host(s) with 3 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Oct 14 06:06:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:57 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:57 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:57 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:57 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' 
entity='mgr.np0005486733.primvu' Oct 14 06:06:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:06:57.634 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:06:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:06:57.634 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:06:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:06:57.634 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:06:58 localhost systemd[1]: var-lib-containers-storage-overlay-0d8054fce2e0b53a8157fcf197f9d5bd017b3772a3fbfb7720f5513caf426259-merged.mount: Deactivated successfully. 
Oct 14 06:06:58 localhost podman[317783]: Oct 14 06:06:58 localhost podman[317783]: 2025-10-14 10:06:58.128350496 +0000 UTC m=+0.080808002 container create 422c2b8e7ff6086da11ca883b44a4d2c6597f24010ec60970f2822d18c1d92aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_shtern, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, version=7, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 14 06:06:58 localhost systemd[1]: Started libpod-conmon-422c2b8e7ff6086da11ca883b44a4d2c6597f24010ec60970f2822d18c1d92aa.scope. Oct 14 06:06:58 localhost systemd[1]: Started libcrun container. 
Oct 14 06:06:58 localhost podman[317783]: 2025-10-14 10:06:58.188934463 +0000 UTC m=+0.141391959 container init 422c2b8e7ff6086da11ca883b44a4d2c6597f24010ec60970f2822d18c1d92aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_shtern, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-type=git, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, RELEASE=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12) Oct 14 06:06:58 localhost podman[317783]: 2025-10-14 10:06:58.097146022 +0000 UTC m=+0.049603578 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:06:58 localhost podman[317783]: 2025-10-14 10:06:58.198330691 +0000 UTC m=+0.150788187 container start 422c2b8e7ff6086da11ca883b44a4d2c6597f24010ec60970f2822d18c1d92aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_shtern, release=553, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 06:06:58 localhost podman[317783]: 2025-10-14 10:06:58.198596788 +0000 UTC m=+0.151054334 container attach 422c2b8e7ff6086da11ca883b44a4d2c6597f24010ec60970f2822d18c1d92aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_shtern, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, release=553, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.33.12, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 14 06:06:58 localhost 
happy_shtern[317798]: 167 167 Oct 14 06:06:58 localhost systemd[1]: libpod-422c2b8e7ff6086da11ca883b44a4d2c6597f24010ec60970f2822d18c1d92aa.scope: Deactivated successfully. Oct 14 06:06:58 localhost podman[317783]: 2025-10-14 10:06:58.201234557 +0000 UTC m=+0.153692053 container died 422c2b8e7ff6086da11ca883b44a4d2c6597f24010ec60970f2822d18c1d92aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_shtern, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=553, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, name=rhceph) Oct 14 06:06:58 localhost podman[317803]: 2025-10-14 10:06:58.296375066 +0000 UTC m=+0.083199105 container remove 422c2b8e7ff6086da11ca883b44a4d2c6597f24010ec60970f2822d18c1d92aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_shtern, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, architecture=x86_64, version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 14 06:06:58 localhost systemd[1]: libpod-conmon-422c2b8e7ff6086da11ca883b44a4d2c6597f24010ec60970f2822d18c1d92aa.scope: Deactivated successfully. Oct 14 06:06:58 localhost ceph-mon[307093]: Reconfiguring daemon osd.2 on np0005486731.localdomain Oct 14 06:06:58 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:58 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:58 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:58 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:58 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 14 06:06:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:58 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:58 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:06:58 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:06:58 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 14 06:06:58 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:06:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:59 localhost systemd[1]: var-lib-containers-storage-overlay-82d44e0345483ccca7062dede07cafaf902d02623cf2371b094705008fde4474-merged.mount: Deactivated successfully. 
Oct 14 06:06:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:06:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:06:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:06:59 localhost ceph-mon[307093]: Reconfiguring daemon osd.4 on np0005486731.localdomain Oct 14 06:06:59 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:59 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:59 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:59 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:59 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:59 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486732.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:06:59 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:59 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:06:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' 
entity='mgr.np0005486733.primvu' Oct 14 06:07:00 localhost ceph-mon[307093]: Reconfiguring crash.np0005486732 (monmap changed)... Oct 14 06:07:00 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486732 on np0005486732.localdomain Oct 14 06:07:00 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:00 localhost ceph-mon[307093]: Reconfiguring osd.1 (monmap changed)... Oct 14 06:07:00 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 14 06:07:00 localhost ceph-mon[307093]: Reconfiguring daemon osd.1 on np0005486732.localdomain Oct 14 06:07:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:07:00 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:07:00 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:07:00 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:07:00 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:00 localhost podman[246584]: 
time="2025-10-14T10:07:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:07:00 localhost podman[246584]: @ - - [14/Oct/2025:10:07:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:07:00 localhost podman[246584]: @ - - [14/Oct/2025:10:07:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18340 "" "Go-http-client/1.1" Oct 14 06:07:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:07:01 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:01 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:01 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:01 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:01 localhost ceph-mon[307093]: Reconfiguring osd.5 (monmap changed)... 
Oct 14 06:07:01 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 14 06:07:01 localhost ceph-mon[307093]: Reconfiguring daemon osd.5 on np0005486732.localdomain Oct 14 06:07:01 localhost podman[317827]: 2025-10-14 10:07:01.541956869 +0000 UTC m=+0.081846599 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:07:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:07:01 localhost podman[317827]: 2025-10-14 10:07:01.558158346 +0000 UTC m=+0.098048056 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:07:01 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:07:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:07:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:07:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:07:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 14 06:07:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:07:02 localhost 
ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:07:02 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:02 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:07:02 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:02 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:07:02 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:07:02 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486732.xkownj (monmap changed)... 
Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486732.xkownj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:07:02 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486732.xkownj on np0005486732.localdomain Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:07:02 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486732.pasqzz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:07:02 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 14 06:07:02 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:03 localhost openstack_network_exporter[248748]: ERROR 10:07:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:07:03 localhost openstack_network_exporter[248748]: ERROR 10:07:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:07:03 localhost openstack_network_exporter[248748]: ERROR 10:07:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:07:03 localhost openstack_network_exporter[248748]: 
ERROR 10:07:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:07:03 localhost openstack_network_exporter[248748]: Oct 14 06:07:03 localhost openstack_network_exporter[248748]: ERROR 10:07:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:07:03 localhost openstack_network_exporter[248748]: Oct 14 06:07:03 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:07:03 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:03 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:07:03 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:03 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486732.pasqzz (monmap changed)... Oct 14 06:07:03 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486732.pasqzz on np0005486732.localdomain Oct 14 06:07:03 localhost ceph-mon[307093]: Saving service mon spec with placement label:mon Oct 14 06:07:03 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:03 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:03 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:03 localhost ceph-mon[307093]: Reconfiguring mon.np0005486732 (monmap changed)... 
Oct 14 06:07:03 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:07:03 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486732 on np0005486732.localdomain Oct 14 06:07:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:07:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:07:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 14 06:07:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:07:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:07:05 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:05 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:07:05 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:05 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:05 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:05 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:07:05 localhost ceph-mon[307093]: Reconfiguring crash.np0005486733 (monmap changed)... Oct 14 06:07:05 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005486733.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 14 06:07:05 localhost ceph-mon[307093]: Reconfiguring daemon crash.np0005486733 on np0005486733.localdomain Oct 14 06:07:05 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:07:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:07:05 localhost podman[317845]: 2025-10-14 10:07:05.540645769 +0000 UTC m=+0.081032597 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 14 06:07:05 localhost podman[317845]: 2025-10-14 10:07:05.545921107 +0000 UTC 
m=+0.086307985 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:07:05 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:07:05 localhost podman[317846]: 2025-10-14 10:07:05.593665656 +0000 UTC m=+0.131444085 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:07:05 localhost podman[317846]: 2025-10-14 10:07:05.60517369 +0000 UTC m=+0.142952149 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:07:05 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:07:06 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:06 localhost ceph-mon[307093]: Reconfiguring osd.0 (monmap changed)... Oct 14 06:07:06 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 14 06:07:06 localhost ceph-mon[307093]: Reconfiguring daemon osd.0 on np0005486733.localdomain Oct 14 06:07:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:07:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:07:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:07:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:07:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 
localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:07:07 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:07:07 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:07:07 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: Reconfiguring osd.3 (monmap changed)... 
Oct 14 06:07:07 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 14 06:07:07 localhost ceph-mon[307093]: Reconfiguring daemon osd.3 on np0005486733.localdomain Oct 14 06:07:07 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:07:07 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 14 06:07:07 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:07:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:07:08 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:07:08 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:08 localhost ceph-mon[307093]: 
from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:08 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:08 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:07:08 localhost ceph-mon[307093]: Reconfiguring mds.mds.np0005486733.tvstmf (monmap changed)... Oct 14 06:07:08 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005486733.tvstmf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 14 06:07:08 localhost ceph-mon[307093]: Reconfiguring daemon mds.mds.np0005486733.tvstmf on np0005486733.localdomain Oct 14 06:07:08 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 14 06:07:08 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:07:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:07:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : 
from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:07:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:09 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:09 localhost ceph-mon[307093]: Reconfiguring mgr.np0005486733.primvu (monmap changed)... Oct 14 06:07:09 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:07:09 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005486733.primvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 14 06:07:09 localhost ceph-mon[307093]: Reconfiguring daemon mgr.np0005486733.primvu on np0005486733.localdomain Oct 14 06:07:09 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:09 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:09 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:07:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:07:10 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key 
set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:07:10 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:07:10 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 14 06:07:10 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:10 localhost ceph-mon[307093]: Reconfiguring mon.np0005486733 (monmap changed)... Oct 14 06:07:10 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486733 on np0005486733.localdomain Oct 14 06:07:10 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:10 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:10 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:07:10 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:10 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:11 localhost podman[317955]: Oct 14 06:07:11 localhost podman[317955]: 2025-10-14 10:07:11.187715113 +0000 UTC m=+0.078514871 container create 423afe244cb4639a3035f5a595d97acbda08473d061cdcc2b265ce6d1c06975e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_napier, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph) Oct 14 06:07:11 localhost systemd[1]: Started libpod-conmon-423afe244cb4639a3035f5a595d97acbda08473d061cdcc2b265ce6d1c06975e.scope. Oct 14 06:07:11 localhost systemd[1]: Started libcrun container. 
Oct 14 06:07:11 localhost podman[317955]: 2025-10-14 10:07:11.251894875 +0000 UTC m=+0.142694643 container init 423afe244cb4639a3035f5a595d97acbda08473d061cdcc2b265ce6d1c06975e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_napier, ceph=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 14 06:07:11 localhost podman[317955]: 2025-10-14 10:07:11.154806366 +0000 UTC m=+0.045606164 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:07:11 localhost podman[317955]: 2025-10-14 10:07:11.261002735 +0000 UTC m=+0.151802493 container start 423afe244cb4639a3035f5a595d97acbda08473d061cdcc2b265ce6d1c06975e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_napier, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, release=553) Oct 14 06:07:11 localhost podman[317955]: 2025-10-14 10:07:11.261239752 +0000 UTC m=+0.152039520 container attach 423afe244cb4639a3035f5a595d97acbda08473d061cdcc2b265ce6d1c06975e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_napier, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=7, distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, release=553, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 14 06:07:11 localhost 
eager_napier[317970]: 167 167 Oct 14 06:07:11 localhost systemd[1]: libpod-423afe244cb4639a3035f5a595d97acbda08473d061cdcc2b265ce6d1c06975e.scope: Deactivated successfully. Oct 14 06:07:11 localhost podman[317955]: 2025-10-14 10:07:11.265529735 +0000 UTC m=+0.156329553 container died 423afe244cb4639a3035f5a595d97acbda08473d061cdcc2b265ce6d1c06975e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_napier, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main) Oct 14 06:07:11 localhost ceph-mon[307093]: from='mgr.17433 172.18.0.108:0/2728758967' entity='mgr.np0005486733.primvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 14 06:07:11 localhost podman[317975]: 2025-10-14 10:07:11.365034939 +0000 UTC m=+0.090623541 container remove 423afe244cb4639a3035f5a595d97acbda08473d061cdcc2b265ce6d1c06975e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_napier, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, 
GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_BRANCH=main, vcs-type=git, name=rhceph, io.openshift.expose-services=, release=553, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 14 06:07:11 localhost systemd[1]: libpod-conmon-423afe244cb4639a3035f5a595d97acbda08473d061cdcc2b265ce6d1c06975e.scope: Deactivated successfully. Oct 14 06:07:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:07:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:07:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:12 localhost systemd[1]: var-lib-containers-storage-overlay-c91888238b0547e22c991989e5cc482af26c455df242d57884a312e42e7e0d03-merged.mount: Deactivated successfully. Oct 14 06:07:12 localhost ceph-mon[307093]: Reconfiguring mon.np0005486731 (monmap changed)... 
Oct 14 06:07:12 localhost ceph-mon[307093]: Reconfiguring daemon mon.np0005486731 on np0005486731.localdomain Oct 14 06:07:12 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:12 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:07:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:07:13 localhost podman[317993]: 2025-10-14 10:07:13.54806729 +0000 UTC m=+0.089509450 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:07:13 localhost podman[317993]: 2025-10-14 10:07:13.564221097 +0000 UTC m=+0.105663257 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:07:13 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:07:13 localhost podman[317992]: 2025-10-14 10:07:13.646817434 +0000 UTC m=+0.189390614 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS) Oct 14 06:07:13 localhost podman[317992]: 2025-10-14 10:07:13.655747289 +0000 UTC m=+0.198320479 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 14 06:07:13 localhost systemd[1]: 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:07:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:07:14 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.078756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436434078794, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2579, "num_deletes": 254, "total_data_size": 6530519, "memory_usage": 6853280, "flush_reason": "Manual Compaction"} Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436434116694, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 5832644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18955, "largest_seqno": 21529, "table_properties": {"data_size": 5820855, "index_size": 7334, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30204, "raw_average_key_size": 22, "raw_value_size": 
5795181, "raw_average_value_size": 4337, "num_data_blocks": 320, "num_entries": 1336, "num_filter_entries": 1336, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436375, "oldest_key_time": 1760436375, "file_creation_time": 1760436434, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 38047 microseconds, and 11411 cpu microseconds. Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.116798) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 5832644 bytes OK Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.116827) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.118809) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.118837) EVENT_LOG_v1 {"time_micros": 1760436434118829, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.118864) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 6518452, prev total WAL file size 6518452, number of live WAL files 2. Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.120565) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. 
'7061786F73003131373938' seq:0, type:0; will stop at (end) Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(5695KB)], [33(14MB)] Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436434120652, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 21018591, "oldest_snapshot_seqno": -1} Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12043 keys, 18827036 bytes, temperature: kUnknown Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436434226800, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 18827036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18758695, "index_size": 37136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 324113, "raw_average_key_size": 26, "raw_value_size": 18553893, "raw_average_value_size": 1540, "num_data_blocks": 1409, "num_entries": 12043, "num_filter_entries": 12043, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436434, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.227172) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 18827036 bytes Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.228910) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.8 rd, 177.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.6, 14.5 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 12589, records dropped: 546 output_compression: NoCompression Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.228943) EVENT_LOG_v1 {"time_micros": 1760436434228929, "job": 18, "event": "compaction_finished", "compaction_time_micros": 106274, "compaction_time_cpu_micros": 50790, "output_level": 6, "num_output_files": 1, "total_output_size": 18827036, "num_input_records": 12589, "num_output_records": 12043, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005486731/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436434230317, "job": 18, "event": "table_file_deletion", "file_number": 35} Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436434233287, "job": 18, "event": "table_file_deletion", "file_number": 33} Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.120457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.233359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.233365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.233368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.233372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:07:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:07:14.233374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:07:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:14 
localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e36: np0005486733.primvu(active, since 25s), standbys: np0005486731.swasqz, np0005486732.pasqzz Oct 14 06:07:15 localhost ceph-mon[307093]: from='mgr.17433 ' entity='mgr.np0005486733.primvu' Oct 14 06:07:16 localhost nova_compute[295778]: 2025-10-14 10:07:16.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:07:16 localhost nova_compute[295778]: 2025-10-14 10:07:16.922 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:07:16 localhost nova_compute[295778]: 2025-10-14 10:07:16.923 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:07:16 localhost nova_compute[295778]: 2025-10-14 10:07:16.923 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:07:16 localhost nova_compute[295778]: 2025-10-14 10:07:16.923 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:07:16 localhost nova_compute[295778]: 2025-10-14 10:07:16.924 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:07:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:07:17 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/281528741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:07:17 localhost nova_compute[295778]: 2025-10-14 10:07:17.390 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:07:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:07:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:07:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:07:17 localhost podman[318055]: 2025-10-14 10:07:17.556201629 +0000 UTC m=+0.088676379 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 06:07:17 localhost podman[318054]: 2025-10-14 10:07:17.587865513 +0000 UTC m=+0.127764399 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal 
Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, architecture=x86_64, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 14 06:07:17 localhost podman[318059]: 2025-10-14 10:07:17.613206381 +0000 UTC m=+0.140694449 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:07:17 localhost podman[318055]: 2025-10-14 10:07:17.62757054 +0000 UTC m=+0.160045300 container exec_died 
328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:07:17 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:07:17 localhost nova_compute[295778]: 2025-10-14 10:07:17.645 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:07:17 localhost nova_compute[295778]: 2025-10-14 10:07:17.647 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12261MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:07:17 localhost nova_compute[295778]: 2025-10-14 10:07:17.647 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:07:17 localhost nova_compute[295778]: 2025-10-14 10:07:17.648 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:07:17 localhost podman[318059]: 2025-10-14 10:07:17.650032412 +0000 UTC m=+0.177520440 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', 
'--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:07:17 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:07:17 localhost podman[318054]: 2025-10-14 10:07:17.671785896 +0000 UTC m=+0.211684832 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9) Oct 14 06:07:17 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:07:17 localhost nova_compute[295778]: 2025-10-14 10:07:17.972 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:07:17 localhost nova_compute[295778]: 2025-10-14 10:07:17.973 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:07:18 localhost nova_compute[295778]: 2025-10-14 10:07:18.277 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 14 06:07:18 localhost nova_compute[295778]: 2025-10-14 10:07:18.521 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 14 06:07:18 localhost nova_compute[295778]: 2025-10-14 10:07:18.521 2 DEBUG nova.compute.provider_tree [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 14 06:07:18 localhost nova_compute[295778]: 2025-10-14 10:07:18.538 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 14 06:07:18 localhost nova_compute[295778]: 2025-10-14 10:07:18.560 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: 
HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 14 06:07:18 localhost nova_compute[295778]: 2025-10-14 10:07:18.578 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:07:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Oct 14 06:07:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1470843020' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:07:19 localhost nova_compute[295778]: 2025-10-14 10:07:19.026 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:07:19 localhost nova_compute[295778]: 2025-10-14 10:07:19.032 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:07:19 localhost nova_compute[295778]: 2025-10-14 10:07:19.060 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:07:19 localhost nova_compute[295778]: 2025-10-14 10:07:19.063 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 
06:07:19 localhost nova_compute[295778]: 2025-10-14 10:07:19.063 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:07:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:20 localhost nova_compute[295778]: 2025-10-14 10:07:20.064 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:07:20 localhost nova_compute[295778]: 2025-10-14 10:07:20.065 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:07:20 localhost nova_compute[295778]: 2025-10-14 10:07:20.065 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:07:21 localhost nova_compute[295778]: 2025-10-14 10:07:21.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:07:21 localhost nova_compute[295778]: 2025-10-14 10:07:21.903 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - 
- - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:07:22 localhost nova_compute[295778]: 2025-10-14 10:07:22.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:07:22 localhost nova_compute[295778]: 2025-10-14 10:07:22.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:07:22 localhost nova_compute[295778]: 2025-10-14 10:07:22.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:07:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:24 localhost nova_compute[295778]: 2025-10-14 10:07:24.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:07:24 localhost nova_compute[295778]: 2025-10-14 10:07:24.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:07:24 localhost nova_compute[295778]: 2025-10-14 10:07:24.906 2 DEBUG 
nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:07:24 localhost nova_compute[295778]: 2025-10-14 10:07:24.975 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:07:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:30 localhost podman[246584]: time="2025-10-14T10:07:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:07:30 localhost podman[246584]: @ - - [14/Oct/2025:10:07:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:07:30 localhost podman[246584]: @ - - [14/Oct/2025:10:07:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18349 "" "Go-http-client/1.1" Oct 14 06:07:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:07:32 localhost systemd[1]: tmp-crun.MXbhwI.mount: Deactivated successfully. 
Oct 14 06:07:32 localhost podman[318141]: 2025-10-14 10:07:32.54869253 +0000 UTC m=+0.085136205 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:07:32 localhost podman[318141]: 2025-10-14 10:07:32.563169473 +0000 UTC m=+0.099613178 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Oct 14 06:07:32 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:07:33 localhost openstack_network_exporter[248748]: ERROR 10:07:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:07:33 localhost openstack_network_exporter[248748]: ERROR 10:07:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:07:33 localhost openstack_network_exporter[248748]: ERROR 10:07:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:07:33 localhost openstack_network_exporter[248748]: ERROR 10:07:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:07:33 localhost openstack_network_exporter[248748]: Oct 14 06:07:33 localhost openstack_network_exporter[248748]: ERROR 10:07:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:07:33 localhost openstack_network_exporter[248748]: Oct 14 06:07:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:07:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:07:36 localhost podman[318161]: 2025-10-14 10:07:36.530545387 +0000 UTC m=+0.066340141 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:07:36 localhost podman[318161]: 2025-10-14 10:07:36.544585307 +0000 UTC m=+0.080380061 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:07:36 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:07:36 localhost podman[318160]: 2025-10-14 10:07:36.600991304 +0000 UTC m=+0.139503919 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 06:07:36 localhost podman[318160]: 2025-10-14 10:07:36.608122842 +0000 UTC m=+0.146635517 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 06:07:36 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:07:37 localhost sshd[318200]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:07:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:07:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:07:44 localhost podman[318202]: 2025-10-14 10:07:44.542273853 +0000 UTC m=+0.082366263 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:07:44 localhost podman[318202]: 2025-10-14 10:07:44.552694988 +0000 UTC m=+0.092787478 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:07:44 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:07:44 localhost podman[318201]: 2025-10-14 10:07:44.646353876 +0000 UTC m=+0.188858369 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:07:44 localhost podman[318201]: 2025-10-14 10:07:44.656809752 +0000 UTC m=+0.199314265 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:07:44 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:07:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:07:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 06:07:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:07:48 localhost systemd[1]: tmp-crun.ZHp0DL.mount: Deactivated successfully. Oct 14 06:07:48 localhost podman[318239]: 2025-10-14 10:07:48.566131195 +0000 UTC m=+0.105579785 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Oct 14 06:07:48 localhost systemd[1]: tmp-crun.gqEPSV.mount: Deactivated successfully. Oct 14 06:07:48 localhost podman[318241]: 2025-10-14 10:07:48.614902102 +0000 UTC m=+0.142024386 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', 
'--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:07:48 localhost podman[318239]: 2025-10-14 10:07:48.630491592 +0000 UTC m=+0.169940202 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Oct 14 06:07:48 localhost podman[318240]: 2025-10-14 10:07:48.652639716 +0000 UTC m=+0.184008652 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:07:48 localhost podman[318241]: 2025-10-14 10:07:48.676486015 +0000 UTC m=+0.203608299 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:07:48 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:07:48 localhost podman[318240]: 2025-10-14 10:07:48.698185576 +0000 UTC m=+0.229554522 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:07:48 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:07:48 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:07:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:07:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:07:57.633 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:07:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:07:57.634 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:07:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:07:57.634 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:07:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:00 localhost podman[246584]: time="2025-10-14T10:08:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:08:00 localhost podman[246584]: @ - - [14/Oct/2025:10:08:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:08:00 localhost podman[246584]: @ - - 
[14/Oct/2025:10:08:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18343 "" "Go-http-client/1.1" Oct 14 06:08:03 localhost openstack_network_exporter[248748]: ERROR 10:08:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:08:03 localhost openstack_network_exporter[248748]: ERROR 10:08:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:08:03 localhost openstack_network_exporter[248748]: ERROR 10:08:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:08:03 localhost openstack_network_exporter[248748]: ERROR 10:08:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:08:03 localhost openstack_network_exporter[248748]: Oct 14 06:08:03 localhost openstack_network_exporter[248748]: ERROR 10:08:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:08:03 localhost openstack_network_exporter[248748]: Oct 14 06:08:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:08:03 localhost podman[318302]: 2025-10-14 10:08:03.548943543 +0000 UTC m=+0.086513151 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:08:03 localhost podman[318302]: 2025-10-14 10:08:03.587105419 +0000 UTC m=+0.124675047 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 14 06:08:03 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:08:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:08:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:08:07 localhost podman[318321]: 2025-10-14 10:08:07.540891074 +0000 UTC m=+0.081285624 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible) Oct 14 06:08:07 localhost podman[318322]: 2025-10-14 10:08:07.598241806 +0000 UTC m=+0.130597725 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:08:07 localhost podman[318321]: 2025-10-14 10:08:07.622456194 +0000 UTC m=+0.162850744 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Oct 14 06:08:07 localhost podman[318322]: 2025-10-14 10:08:07.631089032 +0000 UTC m=+0.163444951 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 
'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:08:07 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:08:07 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:08:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mgr fail"} v 0) Oct 14 06:08:08 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 14 06:08:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e83 do_prune osdmap full prune enabled Oct 14 06:08:08 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Activating manager daemon np0005486731.swasqz Oct 14 06:08:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 e84: 6 total, 6 up, 6 in Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr handle_mgr_map Activating! Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr handle_mgr_map I am now activating Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e84: 6 total, 6 up, 6 in Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e37: np0005486731.swasqz(active, starting, since 0.0269738s), standbys: np0005486732.pasqzz Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486731"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mon metadata", "id": "np0005486731"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486732"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mon metadata", "id": "np0005486732"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005486733"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mon metadata", "id": "np0005486733"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005486733.tvstmf"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mds metadata", "who": "mds.np0005486733.tvstmf"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).mds e16 all = 0 Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005486731.onyaog"} 
v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mds metadata", "who": "mds.np0005486731.onyaog"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).mds e16 all = 0 Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005486732.xkownj"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mds metadata", "who": "mds.np0005486732.xkownj"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).mds e16 all = 0 Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005486731.swasqz", "id": "np0005486731.swasqz"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mgr metadata", "who": "np0005486731.swasqz", "id": "np0005486731.swasqz"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005486732.pasqzz", "id": "np0005486732.pasqzz"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mgr metadata", "who": "np0005486732.pasqzz", "id": "np0005486732.pasqzz"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": 
"osd metadata", "id": 0} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd metadata", "id": 1} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd metadata", "id": 2} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd metadata", "id": 3} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd metadata", "id": 4} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd metadata", "id": 5} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mds metadata"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 
172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mds metadata"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).mds e16 all = 1 Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd metadata"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd metadata"} : dispatch Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon metadata"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mon metadata"} : dispatch Oct 14 06:08:09 localhost ceph-mgr[300442]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: balancer Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Manager daemon np0005486731.swasqz is now available Oct 14 06:08:09 localhost ceph-mgr[300442]: [balancer INFO root] Starting Oct 14 06:08:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:08:09 Oct 14 06:08:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:08:09 localhost ceph-mgr[300442]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Oct 14 06:08:09 localhost ceph-mgr[300442]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost systemd[1]: session-72.scope: Deactivated successfully. Oct 14 06:08:09 localhost systemd[1]: session-72.scope: Consumed 8.782s CPU time. Oct 14 06:08:09 localhost systemd-logind[760]: Session 72 logged out. Waiting for processes to exit. 
Oct 14 06:08:09 localhost systemd-logind[760]: Removed session 72. Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: cephadm Oct 14 06:08:09 localhost ceph-mgr[300442]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: crash Oct 14 06:08:09 localhost ceph-mgr[300442]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: devicehealth Oct 14 06:08:09 localhost ceph-mgr[300442]: [devicehealth INFO root] Starting Oct 14 06:08:09 localhost ceph-mgr[300442]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: iostat Oct 14 06:08:09 localhost ceph-mgr[300442]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: nfs Oct 14 06:08:09 localhost ceph-mgr[300442]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: orchestrator Oct 14 06:08:09 localhost ceph-mgr[300442]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: pg_autoscaler Oct 14 06:08:09 localhost ceph-mgr[300442]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: progress Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] recovery thread starting Oct 
14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] starting setup Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/mirror_snapshot_schedule"} v 0) Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/mirror_snapshot_schedule"} : dispatch Oct 14 06:08:09 localhost ceph-mgr[300442]: [progress INFO root] Loading... Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: rbd_support Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:08:09 localhost ceph-mgr[300442]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Oct 14 06:08:09 localhost ceph-mgr[300442]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: restful Oct 14 06:08:09 localhost ceph-mgr[300442]: [progress INFO root] Loaded OSDMap, ready. 
Oct 14 06:08:09 localhost ceph-mgr[300442]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: status
Oct 14 06:08:09 localhost ceph-mgr[300442]: [restful INFO root] server_addr: :: server_port: 8003
Oct 14 06:08:09 localhost ceph-mgr[300442]: [restful WARNING root] server not running: no certificate configured
Oct 14 06:08:09 localhost ceph-mgr[300442]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: telemetry
Oct 14 06:08:09 localhost ceph-mgr[300442]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:08:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:08:09 localhost ceph-mgr[300442]: mgr load Constructed class from module: volumes
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] PerfHandler: starting
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.163+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.163+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.163+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.163+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.163+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.164+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.164+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.164+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.164+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:08:09.164+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TaskHandler: starting
Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/trash_purge_schedule"} v 0)
Oct 14 06:08:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/trash_purge_schedule"} : dispatch
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 14 06:08:09 localhost ceph-mgr[300442]: [rbd_support INFO root] setup complete
Oct 14 06:08:09 localhost sshd[318499]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:08:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:08:09 localhost systemd-logind[760]: New session 73 of user ceph-admin.
Oct 14 06:08:09 localhost systemd[1]: Started Session 73 of User ceph-admin.
Oct 14 06:08:09 localhost ceph-mon[307093]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Oct 14 06:08:09 localhost ceph-mon[307093]: Activating manager daemon np0005486731.swasqz
Oct 14 06:08:09 localhost ceph-mon[307093]: from='client.? 172.18.0.200:0/2791224686' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Oct 14 06:08:09 localhost ceph-mon[307093]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Oct 14 06:08:09 localhost ceph-mon[307093]: Manager daemon np0005486731.swasqz is now available
Oct 14 06:08:09 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/mirror_snapshot_schedule"} : dispatch
Oct 14 06:08:09 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005486731.swasqz/trash_purge_schedule"} : dispatch
Oct 14 06:08:10 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e38: np0005486731.swasqz(active, since 1.04236s), standbys: np0005486732.pasqzz
Oct 14 06:08:10 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:08:10 localhost podman[318609]: 2025-10-14 10:08:10.33288145 +0000 UTC m=+0.086222775 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, vcs-type=git, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, version=7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 14 06:08:10 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:08:10] ENGINE Bus STARTING
Oct 14 06:08:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:08:10] ENGINE Bus STARTING
Oct 14 06:08:10 localhost podman[318609]: 2025-10-14 10:08:10.467226232 +0000 UTC m=+0.220567547 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, release=553, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=)
Oct 14 06:08:10 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:08:10] ENGINE Serving on http://172.18.0.106:8765
Oct 14 06:08:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:08:10] ENGINE Serving on http://172.18.0.106:8765
Oct 14 06:08:10 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:08:10] ENGINE Serving on https://172.18.0.106:7150
Oct 14 06:08:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:08:10] ENGINE Serving on https://172.18.0.106:7150
Oct 14 06:08:10 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:08:10] ENGINE Bus STARTED
Oct 14 06:08:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:08:10] ENGINE Bus STARTED
Oct 14 06:08:10 localhost ceph-mgr[300442]: [cephadm INFO cherrypy.error] [14/Oct/2025:10:08:10] ENGINE Client ('172.18.0.106', 60988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 14 06:08:10 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : [14/Oct/2025:10:08:10] ENGINE Client ('172.18.0.106', 60988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 14 06:08:11 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_DAEMON (was: 3 stray daemon(s) not managed by cephadm)
Oct 14 06:08:11 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_HOST (was: 3 stray host(s) with 3 daemon(s) not managed by cephadm)
Oct 14 06:08:11 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 14 06:08:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:08:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:08:11 localhost ceph-mon[307093]: [14/Oct/2025:10:08:10] ENGINE Bus STARTING
Oct 14 06:08:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:08:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0)
Oct 14 06:08:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0)
Oct 14 06:08:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:11 localhost ceph-mgr[300442]: [devicehealth INFO root] Check health
Oct 14 06:08:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0)
Oct 14 06:08:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0)
Oct 14 06:08:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: [14/Oct/2025:10:08:10] ENGINE Serving on http://172.18.0.106:8765
Oct 14 06:08:12 localhost ceph-mon[307093]: [14/Oct/2025:10:08:10] ENGINE Serving on https://172.18.0.106:7150
Oct 14 06:08:12 localhost ceph-mon[307093]: [14/Oct/2025:10:08:10] ENGINE Bus STARTED
Oct 14 06:08:12 localhost ceph-mon[307093]: [14/Oct/2025:10:08:10] ENGINE Client ('172.18.0.106', 60988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 14 06:08:12 localhost ceph-mon[307093]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 3 stray daemon(s) not managed by cephadm)
Oct 14 06:08:12 localhost ceph-mon[307093]: Health check cleared: CEPHADM_STRAY_HOST (was: 3 stray host(s) with 3 daemon(s) not managed by cephadm)
Oct 14 06:08:12 localhost ceph-mon[307093]: Cluster is now healthy
Oct 14 06:08:12 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e39: np0005486731.swasqz(active, since 3s), standbys: np0005486732.pasqzz
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:12 localhost ceph-mgr[300442]: [cephadm INFO root] Adjusting osd_memory_target on np0005486731.localdomain to 836.6M
Oct 14 06:08:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005486731.localdomain to 836.6M
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 14 06:08:12 localhost ceph-mgr[300442]: [cephadm INFO root] Adjusting osd_memory_target on np0005486732.localdomain to 836.6M
Oct 14 06:08:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005486732.localdomain to 836.6M
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 14 06:08:12 localhost ceph-mgr[300442]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:08:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:08:12 localhost ceph-mgr[300442]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:08:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:12 localhost ceph-mgr[300442]: [cephadm INFO root] Adjusting osd_memory_target on np0005486733.localdomain to 836.6M
Oct 14 06:08:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005486733.localdomain to 836.6M
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 14 06:08:12 localhost ceph-mgr[300442]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:08:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:08:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 14 06:08:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:08:12 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:08:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:08:12 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:08:12 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:08:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:08:12 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:08:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 14 06:08:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:08:13 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:08:13 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:08:13 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:08:13 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:08:13 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:08:13 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:08:14 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:08:14 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:08:14 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:08:14 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:08:14 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : Standby manager daemon np0005486733.primvu started
Oct 14 06:08:14 localhost ceph-mgr[300442]: mgr.server handle_open ignoring open from mgr.np0005486733.primvu 172.18.0.108:0/40674664; not ready for session (expect reconnect)
Oct 14 06:08:14 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:08:14 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:08:14 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486731.localdomain to 836.6M
Oct 14 06:08:14 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486732.localdomain to 836.6M
Oct 14 06:08:14 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:08:14 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:08:14 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486733.localdomain to 836.6M
Oct 14 06:08:14 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:08:14 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.conf
Oct 14 06:08:14 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.conf
Oct 14 06:08:14 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.conf
Oct 14 06:08:14 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:08:14 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:08:14 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.conf
Oct 14 06:08:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:08:14 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:08:14 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:08:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 06:08:14 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:08:14 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:08:14 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e40: np0005486731.swasqz(active, since 5s), standbys: np0005486732.pasqzz, np0005486733.primvu Oct 14 06:08:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005486733.primvu", "id": "np0005486733.primvu"} v 0) Oct 14 06:08:14 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "mgr metadata", "who": "np0005486733.primvu", "id": "np0005486733.primvu"} : dispatch Oct 14 06:08:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 06:08:14 localhost ceph-mgr[300442]: [cephadm INFO cephadm.serve] Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:08:14 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring Oct 14 06:08:14 localhost podman[319366]: 2025-10-14 10:08:14.76783311 +0000 UTC m=+0.117269973 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:08:14 localhost podman[319366]: 2025-10-14 10:08:14.778995514 +0000 UTC m=+0.128432417 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:08:14 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:08:14 localhost podman[319397]: 2025-10-14 10:08:14.835176485 +0000 UTC m=+0.097796169 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Oct 14 06:08:14 localhost podman[319397]: 
2025-10-14 10:08:14.849090112 +0000 UTC m=+0.111709796 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=iscsid, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:08:14 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:08:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 23 op/s
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:08:15 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:08:15 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 14 06:08:15 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 1b00c555-45d1-44e7-9d05-ddbcdf6d6235 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:08:15 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 1b00c555-45d1-44e7-9d05-ddbcdf6d6235 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:08:15 localhost ceph-mgr[300442]: [progress INFO root] Completed event 1b00c555-45d1-44e7-9d05-ddbcdf6d6235 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:15 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 057ab0ae-eb87-49db-abab-9b24a8795092 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:08:15 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 057ab0ae-eb87-49db-abab-9b24a8795092 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:08:15 localhost ceph-mgr[300442]: [progress INFO root] Completed event 057ab0ae-eb87-49db-abab-9b24a8795092 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 14 06:08:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 14 06:08:15 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 14 06:08:16 localhost ceph-mon[307093]: Updating np0005486732.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:08:16 localhost ceph-mon[307093]: Updating np0005486731.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:08:16 localhost ceph-mon[307093]: Updating np0005486733.localdomain:/var/lib/ceph/fcadf6e2-9176-5818-a8d0-37b19acf8eaf/config/ceph.client.admin.keyring
Oct 14 06:08:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:08:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:08:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 0 B/s wr, 16 op/s
Oct 14 06:08:17 localhost nova_compute[295778]: 2025-10-14 10:08:17.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:08:17 localhost nova_compute[295778]: 2025-10-14 10:08:17.924 2 DEBUG
oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:08:17 localhost nova_compute[295778]: 2025-10-14 10:08:17.925 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:08:17 localhost nova_compute[295778]: 2025-10-14 10:08:17.925 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:08:17 localhost nova_compute[295778]: 2025-10-14 10:08:17.926 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:08:17 localhost nova_compute[295778]: 2025-10-14 10:08:17.926 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:08:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:08:18 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/726926607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:08:18 localhost nova_compute[295778]: 2025-10-14 10:08:18.379 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:08:18 localhost nova_compute[295778]: 2025-10-14 10:08:18.590 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:08:18 localhost nova_compute[295778]: 2025-10-14 10:08:18.591 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12221MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": 
"8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:08:18 localhost nova_compute[295778]: 2025-10-14 10:08:18.592 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:08:18 localhost nova_compute[295778]: 2025-10-14 10:08:18.592 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:08:18 localhost nova_compute[295778]: 2025-10-14 10:08:18.674 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total 
allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:08:18 localhost nova_compute[295778]: 2025-10-14 10:08:18.674 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:08:18 localhost nova_compute[295778]: 2025-10-14 10:08:18.703 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:08:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s Oct 14 06:08:19 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:08:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:08:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:08:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4118599065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:08:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:08:19 localhost nova_compute[295778]: 2025-10-14 10:08:19.145 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:08:19 localhost nova_compute[295778]: 2025-10-14 10:08:19.152 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:08:19 localhost nova_compute[295778]: 2025-10-14 10:08:19.193 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:08:19 localhost nova_compute[295778]: 2025-10-14 10:08:19.195 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:08:19 localhost nova_compute[295778]: 2025-10-14 10:08:19.196 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:08:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:19 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:08:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:08:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:08:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:08:19 localhost podman[319645]: 2025-10-14 10:08:19.545652559 +0000 UTC m=+0.080997297 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 14 06:08:19 localhost podman[319645]: 2025-10-14 10:08:19.587098711 +0000 UTC m=+0.122443479 container 
exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 06:08:19 localhost systemd[1]: tmp-crun.A5s0Tk.mount: Deactivated successfully. Oct 14 06:08:19 localhost podman[319647]: 2025-10-14 10:08:19.603593536 +0000 UTC m=+0.132166526 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:08:19 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:08:19 localhost podman[319647]: 2025-10-14 10:08:19.616100496 +0000 UTC m=+0.144673516 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:08:19 localhost systemd[1]: 
c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:08:19 localhost podman[319646]: 2025-10-14 10:08:19.703335915 +0000 UTC m=+0.234059601 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller) Oct 14 06:08:19 localhost podman[319646]: 2025-10-14 10:08:19.798119474 +0000 UTC m=+0.328843130 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller) Oct 14 06:08:19 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:08:20 localhost nova_compute[295778]: 2025-10-14 10:08:20.196 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:08:20 localhost nova_compute[295778]: 2025-10-14 10:08:20.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:08:20 localhost nova_compute[295778]: 2025-10-14 10:08:20.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:08:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Oct 14 06:08:21 localhost nova_compute[295778]: 2025-10-14 10:08:21.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:08:21 localhost nova_compute[295778]: 2025-10-14 10:08:21.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:08:22 localhost nova_compute[295778]: 2025-10-14 10:08:22.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:08:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Oct 14 06:08:23 localhost nova_compute[295778]: 2025-10-14 10:08:23.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:08:23 localhost nova_compute[295778]: 2025-10-14 10:08:23.936 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:08:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:24 localhost nova_compute[295778]: 2025-10-14 10:08:24.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:08:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Oct 14 06:08:25 localhost 
nova_compute[295778]: 2025-10-14 10:08:25.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:08:25 localhost nova_compute[295778]: 2025-10-14 10:08:25.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:08:25 localhost nova_compute[295778]: 2025-10-14 10:08:25.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:08:25 localhost nova_compute[295778]: 2025-10-14 10:08:25.927 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:08:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:30 localhost podman[246584]: time="2025-10-14T10:08:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:08:30 localhost podman[246584]: @ - - [14/Oct/2025:10:08:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:08:30 localhost podman[246584]: @ - - [14/Oct/2025:10:08:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18348 "" "Go-http-client/1.1" Oct 14 06:08:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:33 localhost openstack_network_exporter[248748]: ERROR 10:08:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:08:33 localhost openstack_network_exporter[248748]: ERROR 10:08:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:08:33 localhost openstack_network_exporter[248748]: ERROR 10:08:33 appctl.go:144: Failed to get PID for ovn-northd: no 
control socket files found for ovn-northd Oct 14 06:08:33 localhost openstack_network_exporter[248748]: ERROR 10:08:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:08:33 localhost openstack_network_exporter[248748]: Oct 14 06:08:33 localhost openstack_network_exporter[248748]: ERROR 10:08:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:08:33 localhost openstack_network_exporter[248748]: Oct 14 06:08:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:08:34 localhost podman[319713]: 2025-10-14 10:08:34.53583685 +0000 UTC m=+0.076238901 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009) Oct 14 06:08:34 localhost podman[319713]: 2025-10-14 10:08:34.549034688 +0000 UTC m=+0.089436779 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute) Oct 14 06:08:34 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:08:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:08:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:08:38 localhost podman[319733]: 2025-10-14 10:08:38.539349276 +0000 UTC m=+0.083162054 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:08:38 localhost podman[319733]: 2025-10-14 10:08:38.573237969 +0000 UTC 
m=+0.117050727 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent) Oct 14 06:08:38 localhost podman[319734]: 2025-10-14 10:08:38.58916114 +0000 UTC m=+0.129598629 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:08:38 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:08:38 localhost podman[319734]: 2025-10-14 10:08:38.602175762 +0000 UTC m=+0.142613271 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:08:38 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:08:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:08:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:08:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:08:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:08:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:08:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:08:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:08:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:08:45 localhost podman[319773]: 2025-10-14 10:08:45.512623434 +0000 UTC m=+0.056251154 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:08:45 localhost podman[319773]: 2025-10-14 10:08:45.524998391 +0000 UTC m=+0.068626121 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3) Oct 14 06:08:45 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:08:45 localhost podman[319774]: 2025-10-14 10:08:45.596226259 +0000 UTC m=+0.134002294 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:08:45 localhost podman[319774]: 2025-10-14 10:08:45.614219312 +0000 UTC m=+0.151995367 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251009) Oct 14 06:08:45 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:08:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:08:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3983216951' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:08:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:08:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3983216951' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:08:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:49 
localhost ceilometer_agent_compute[243915]: 2025-10-14 10:08:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:08:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:08:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:08:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:08:50 localhost podman[319811]: 2025-10-14 10:08:50.550992743 +0000 UTC m=+0.083647496 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 14 06:08:50 localhost podman[319812]: 2025-10-14 10:08:50.623653259 +0000 UTC m=+0.150828358 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:08:50 localhost podman[319810]: 2025-10-14 10:08:50.655782716 +0000 UTC m=+0.192371923 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 14 06:08:50 localhost podman[319810]: 2025-10-14 10:08:50.673154083 +0000 UTC m=+0.209743280 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9) Oct 14 06:08:50 localhost podman[319812]: 2025-10-14 10:08:50.683140747 +0000 UTC m=+0.210315796 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:08:50 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:08:50 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:08:50 localhost podman[319811]: 2025-10-14 10:08:50.708476765 +0000 UTC m=+0.241131548 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible) Oct 14 06:08:50 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:08:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:08:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:08:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:08:57.635 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:08:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:08:57.636 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:08:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:08:57.636 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:08:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 
42 GiB avail Oct 14 06:08:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:09:00 localhost podman[246584]: time="2025-10-14T10:09:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:09:00 localhost podman[246584]: @ - - [14/Oct/2025:10:09:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:09:00 localhost podman[246584]: @ - - [14/Oct/2025:10:09:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18355 "" "Go-http-client/1.1" Oct 14 06:09:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:03 localhost openstack_network_exporter[248748]: ERROR 10:09:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:09:03 localhost openstack_network_exporter[248748]: ERROR 10:09:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:09:03 localhost openstack_network_exporter[248748]: ERROR 10:09:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:09:03 localhost openstack_network_exporter[248748]: ERROR 10:09:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:09:03 localhost openstack_network_exporter[248748]: Oct 14 06:09:03 localhost openstack_network_exporter[248748]: ERROR 10:09:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing 
datapath Oct 14 06:09:03 localhost openstack_network_exporter[248748]: Oct 14 06:09:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:09:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:09:05 localhost podman[319878]: 2025-10-14 10:09:05.542771205 +0000 UTC m=+0.080981676 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:09:05 localhost podman[319878]: 2025-10-14 10:09:05.552859731 +0000 UTC m=+0.091070252 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:09:05 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:09:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:09:09 Oct 14 06:09:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:09:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:09:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['vms', 'manila_metadata', 'backups', 'manila_data', '.mgr', 'images', 'volumes'] Oct 14 06:09:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:09:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 
of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:09:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16) Oct 14 06:09:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:09:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:09:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:09:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:09:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:09:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:09:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:09:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:09:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:09:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:09:09 localhost podman[319897]: 2025-10-14 10:09:09.536901404 +0000 UTC m=+0.077860944 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 06:09:09 localhost podman[319897]: 2025-10-14 10:09:09.546123957 +0000 UTC 
m=+0.087083487 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:09:09 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:09:09 localhost podman[319898]: 2025-10-14 10:09:09.599843003 +0000 UTC m=+0.136663733 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:09:09 localhost podman[319898]: 2025-10-14 10:09:09.637192358 +0000 UTC m=+0.174013048 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:09:09 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:09:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:09:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:09:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:09:16 localhost systemd[1]: tmp-crun.LQeseY.mount: Deactivated successfully. 
Oct 14 06:09:16 localhost podman[319956]: 2025-10-14 10:09:16.206766344 +0000 UTC m=+0.137669250 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS) Oct 14 06:09:16 localhost podman[319956]: 2025-10-14 10:09:16.222190321 +0000 UTC m=+0.153093157 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=iscsid) Oct 14 06:09:16 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:09:16 localhost podman[319958]: 2025-10-14 10:09:16.185288708 +0000 UTC m=+0.113130743 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:09:16 localhost podman[319958]: 2025-10-14 10:09:16.269514378 +0000 UTC m=+0.197356413 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:09:16 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:09:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:09:16 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:09:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:09:16 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:09:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:09:16 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:09:16 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev bc35cd5d-c48b-4d73-b2a1-f78bc37837e8 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:09:16 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev bc35cd5d-c48b-4d73-b2a1-f78bc37837e8 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:09:16 localhost ceph-mgr[300442]: [progress INFO root] Completed event bc35cd5d-c48b-4d73-b2a1-f78bc37837e8 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:09:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:09:16 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:09:17 localhost 
ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:17 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:09:17 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:09:17 localhost nova_compute[295778]: 2025-10-14 10:09:17.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:09:17 localhost nova_compute[295778]: 2025-10-14 10:09:17.935 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:09:17 localhost nova_compute[295778]: 2025-10-14 10:09:17.935 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:09:17 localhost nova_compute[295778]: 2025-10-14 10:09:17.936 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:09:17 localhost nova_compute[295778]: 2025-10-14 10:09:17.937 2 DEBUG nova.compute.resource_tracker [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:09:17 localhost nova_compute[295778]: 2025-10-14 10:09:17.938 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:09:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:09:18 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1548697877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:09:18 localhost nova_compute[295778]: 2025-10-14 10:09:18.477 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:09:18 localhost nova_compute[295778]: 2025-10-14 10:09:18.637 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:09:18 localhost nova_compute[295778]: 2025-10-14 10:09:18.638 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=12222MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:09:18 localhost nova_compute[295778]: 2025-10-14 10:09:18.638 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:09:18 localhost nova_compute[295778]: 2025-10-14 10:09:18.638 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:09:18 localhost nova_compute[295778]: 2025-10-14 10:09:18.712 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:09:18 localhost nova_compute[295778]: 2025-10-14 10:09:18.713 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:09:18 localhost nova_compute[295778]: 2025-10-14 10:09:18.745 2 DEBUG oslo_concurrency.processutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:09:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:19 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:09:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:09:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:09:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:09:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3751649600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:09:19 localhost nova_compute[295778]: 2025-10-14 10:09:19.197 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:09:19 localhost nova_compute[295778]: 2025-10-14 10:09:19.204 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:09:19 localhost nova_compute[295778]: 2025-10-14 10:09:19.222 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:09:19 localhost nova_compute[295778]: 2025-10-14 10:09:19.224 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:09:19 localhost nova_compute[295778]: 2025-10-14 10:09:19.225 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:09:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:09:19 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:09:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:21 localhost nova_compute[295778]: 2025-10-14 10:09:21.225 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:09:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:09:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:09:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:09:21 localhost podman[320108]: 2025-10-14 10:09:21.552561327 +0000 UTC m=+0.088776411 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:09:21 localhost podman[320109]: 2025-10-14 10:09:21.612679102 +0000 UTC m=+0.143817563 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:09:21 localhost podman[320108]: 2025-10-14 10:09:21.622220673 +0000 UTC m=+0.158435747 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Oct 14 06:09:21 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:09:21 localhost podman[320109]: 2025-10-14 10:09:21.65429486 +0000 UTC m=+0.185433361 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 06:09:21 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:09:21 localhost podman[320110]: 2025-10-14 10:09:21.713752167 +0000 UTC m=+0.241299872 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:09:21 localhost podman[320110]: 2025-10-14 10:09:21.74949912 +0000 UTC m=+0.277046795 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:09:21 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:09:22 localhost nova_compute[295778]: 2025-10-14 10:09:22.899 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:09:22 localhost nova_compute[295778]: 2025-10-14 10:09:22.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:09:22 localhost nova_compute[295778]: 2025-10-14 10:09:22.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:09:22 localhost nova_compute[295778]: 2025-10-14 10:09:22.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:09:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:23 localhost nova_compute[295778]: 2025-10-14 10:09:23.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:09:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:09:24 localhost nova_compute[295778]: 2025-10-14 10:09:24.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:09:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:25 localhost nova_compute[295778]: 2025-10-14 10:09:25.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:09:26 localhost nova_compute[295778]: 2025-10-14 10:09:26.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:09:26 localhost nova_compute[295778]: 2025-10-14 10:09:26.905 2 DEBUG nova.compute.manager 
[None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:09:26 localhost nova_compute[295778]: 2025-10-14 10:09:26.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:09:26 localhost nova_compute[295778]: 2025-10-14 10:09:26.924 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:09:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:09:30 localhost podman[246584]: time="2025-10-14T10:09:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:09:30 localhost podman[246584]: @ - - [14/Oct/2025:10:09:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142662 "" "Go-http-client/1.1" Oct 14 06:09:30 localhost podman[246584]: @ - - [14/Oct/2025:10:09:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18361 "" "Go-http-client/1.1" Oct 14 06:09:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 
active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:33 localhost openstack_network_exporter[248748]: ERROR 10:09:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:09:33 localhost openstack_network_exporter[248748]: ERROR 10:09:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:09:33 localhost openstack_network_exporter[248748]: ERROR 10:09:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:09:33 localhost openstack_network_exporter[248748]: ERROR 10:09:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:09:33 localhost openstack_network_exporter[248748]: Oct 14 06:09:33 localhost openstack_network_exporter[248748]: ERROR 10:09:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:09:33 localhost openstack_network_exporter[248748]: Oct 14 06:09:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:09:34 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:34.550 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=6) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:09:34 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:34.551 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:09:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:09:36 localhost podman[320173]: 2025-10-14 10:09:36.546430207 +0000 UTC m=+0.084595961 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:09:36 localhost podman[320173]: 2025-10-14 10:09:36.560197671 +0000 UTC m=+0.098363395 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm) Oct 14 06:09:36 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:09:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:38 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:38.630 270389 INFO oslo.privsep.daemon [None req-49391cbf-33a0-4134-8ab6-76e2b63de190 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpidjwswr0/privsep.sock']#033[00m Oct 14 06:09:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail Oct 14 06:09:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:09:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:09:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:09:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Oct 14 06:09:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 14 06:09:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:09:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Oct 14 06:09:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 14 06:09:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:39.201 270389 INFO oslo.privsep.daemon [None req-49391cbf-33a0-4134-8ab6-76e2b63de190 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 14 06:09:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:39.095 320198 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 14 06:09:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:39.099 320198 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 14 06:09:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:39.103 320198 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Oct 14 06:09:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:39.103 320198 INFO oslo.privsep.daemon [-] privsep daemon running as pid 320198#033[00m Oct 14 06:09:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:09:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:39.702 270389 INFO oslo.privsep.daemon [None req-49391cbf-33a0-4134-8ab6-76e2b63de190 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpgz_zl52v/privsep.sock']#033[00m Oct 14 06:09:40 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e41: 
np0005486731.swasqz(active, since 91s), standbys: np0005486732.pasqzz, np0005486733.primvu Oct 14 06:09:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:40.372 270389 INFO oslo.privsep.daemon [None req-49391cbf-33a0-4134-8ab6-76e2b63de190 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 14 06:09:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:40.259 320207 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 14 06:09:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:40.264 320207 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 14 06:09:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:40.268 320207 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Oct 14 06:09:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:40.268 320207 INFO oslo.privsep.daemon [-] privsep daemon running as pid 320207#033[00m Oct 14 06:09:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:09:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:09:40 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:40.554 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:09:40 localhost systemd[1]: tmp-crun.QEoJHY.mount: Deactivated successfully. 
Oct 14 06:09:40 localhost podman[320212]: 2025-10-14 10:09:40.589329114 +0000 UTC m=+0.126118320 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:09:40 localhost podman[320211]: 2025-10-14 10:09:40.600914531 +0000 UTC m=+0.139063103 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:09:40 localhost podman[320211]: 2025-10-14 10:09:40.610145715 +0000 UTC m=+0.148294287 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 14 06:09:40 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:09:40 localhost podman[320212]: 2025-10-14 10:09:40.652973489 +0000 UTC m=+0.189762705 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:09:40 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:09:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 3.7 KiB/s rd, 767 B/s wr, 5 op/s
Oct 14 06:09:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:41.291 270389 INFO oslo.privsep.daemon [None req-49391cbf-33a0-4134-8ab6-76e2b63de190 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmplpn54caa/privsep.sock']
Oct 14 06:09:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 14 06:09:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e85 e85: 6 total, 6 up, 6 in
Oct 14 06:09:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in
Oct 14 06:09:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:41.910 270389 INFO oslo.privsep.daemon [None req-49391cbf-33a0-4134-8ab6-76e2b63de190 - - - - - -] Spawned new privsep daemon via rootwrap
Oct 14 06:09:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:41.795 320261 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 14 06:09:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:41.800 320261 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 14 06:09:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:41.804 320261 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 14 06:09:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:41.804 320261 INFO oslo.privsep.daemon [-] privsep daemon running as pid 320261
Oct 14 06:09:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 105 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 4.4 KiB/s rd, 921 B/s wr, 6 op/s
Oct 14 06:09:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:43.343 270389 INFO neutron.agent.linux.ip_lib [None req-49391cbf-33a0-4134-8ab6-76e2b63de190 - - - - - -] Device tap7d19eda9-50 cannot be used as it has no MAC address
Oct 14 06:09:43 localhost kernel: device tap7d19eda9-50 entered promiscuous mode
Oct 14 06:09:43 localhost NetworkManager[5972]: [1760436583.4232] manager: (tap7d19eda9-50): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Oct 14 06:09:43 localhost ovn_controller[156286]: 2025-10-14T10:09:43Z|00025|binding|INFO|Claiming lport 7d19eda9-50d3-40f9-90eb-ff6972a15572 for this chassis.
Oct 14 06:09:43 localhost ovn_controller[156286]: 2025-10-14T10:09:43Z|00026|binding|INFO|7d19eda9-50d3-40f9-90eb-ff6972a15572: Claiming unknown
Oct 14 06:09:43 localhost systemd-udevd[320277]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:09:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 14 06:09:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:43.448 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.199.3/24', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-965dcdc6-f85b-4165-be8d-f2bef4d49440', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-965dcdc6-f85b-4165-be8d-f2bef4d49440', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ee4b9b3991b43f0864ae295145e40de', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90cc5343-fb85-4315-8319-f3d721c785dc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7d19eda9-50d3-40f9-90eb-ff6972a15572) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:09:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:43.450 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 7d19eda9-50d3-40f9-90eb-ff6972a15572 in datapath 965dcdc6-f85b-4165-be8d-f2bef4d49440 bound to our chassis
Oct 14 06:09:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:43.453 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 88dedc6c-a38c-4a9c-a826-fdb99f1a0f0f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 14 06:09:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:43.453 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 965dcdc6-f85b-4165-be8d-f2bef4d49440, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 06:09:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:43.454 161932 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpjkaursnq/privsep.sock']
Oct 14 06:09:43 localhost ovn_controller[156286]: 2025-10-14T10:09:43Z|00027|binding|INFO|Setting lport 7d19eda9-50d3-40f9-90eb-ff6972a15572 ovn-installed in OVS
Oct 14 06:09:43 localhost ovn_controller[156286]: 2025-10-14T10:09:43Z|00028|binding|INFO|Setting lport 7d19eda9-50d3-40f9-90eb-ff6972a15572 up in Southbound
Oct 14 06:09:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 e86: 6 total, 6 up, 6 in
Oct 14 06:09:43 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e86: 6 total, 6 up, 6 in
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:44.029 161932 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:44.031 161932 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpjkaursnq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:43.931 320313 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:43.936 320313 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:43.940 320313 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:43.940 320313 INFO oslo.privsep.daemon [-] privsep daemon running as pid 320313
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:44.034 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[89263007-ed17-46e6-934d-38267470b259]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:09:44 localhost podman[320342]:
Oct 14 06:09:44 localhost podman[320342]: 2025-10-14 10:09:44.365172611 +0000 UTC m=+0.083024669 container create 56cb213c1dfba3bb6cfc0fb4d40b9c926e0dd0fd6cdd340e5d640249f811f20b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-965dcdc6-f85b-4165-be8d-f2bef4d49440, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS)
Oct 14 06:09:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:09:44 localhost systemd[1]: Started libpod-conmon-56cb213c1dfba3bb6cfc0fb4d40b9c926e0dd0fd6cdd340e5d640249f811f20b.scope.
Oct 14 06:09:44 localhost podman[320342]: 2025-10-14 10:09:44.326191949 +0000 UTC m=+0.044044057 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:09:44 localhost systemd[1]: Started libcrun container.
Oct 14 06:09:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ff5ea20cfc736d50d56cd62dd51bdbbe18fdd3747a56406ca7492d27056933/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:09:44 localhost podman[320342]: 2025-10-14 10:09:44.441197424 +0000 UTC m=+0.159049482 container init 56cb213c1dfba3bb6cfc0fb4d40b9c926e0dd0fd6cdd340e5d640249f811f20b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-965dcdc6-f85b-4165-be8d-f2bef4d49440, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 06:09:44 localhost dnsmasq[320358]: started, version 2.85 cachesize 150
Oct 14 06:09:44 localhost dnsmasq[320358]: DNS service limited to local subnets
Oct 14 06:09:44 localhost dnsmasq[320358]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:09:44 localhost dnsmasq[320358]: warning: no upstream servers configured
Oct 14 06:09:44 localhost dnsmasq-dhcp[320358]: DHCP, static leases only on 192.168.199.0, lease time 1d
Oct 14 06:09:44 localhost dnsmasq[320358]: read /var/lib/neutron/dhcp/965dcdc6-f85b-4165-be8d-f2bef4d49440/addn_hosts - 0 addresses
Oct 14 06:09:44 localhost dnsmasq-dhcp[320358]: read /var/lib/neutron/dhcp/965dcdc6-f85b-4165-be8d-f2bef4d49440/host
Oct 14 06:09:44 localhost dnsmasq-dhcp[320358]: read /var/lib/neutron/dhcp/965dcdc6-f85b-4165-be8d-f2bef4d49440/opts
Oct 14 06:09:44 localhost podman[320342]: 2025-10-14 10:09:44.46108756 +0000 UTC m=+0.178939618 container start 56cb213c1dfba3bb6cfc0fb4d40b9c926e0dd0fd6cdd340e5d640249f811f20b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-965dcdc6-f85b-4165-be8d-f2bef4d49440, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:44.470 320313 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:44.471 320313 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:44.471 320313 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:09:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:44.567 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[2833358c-4092-4054-8636-066cb0137a13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:09:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:09:44.622 270389 INFO neutron.agent.dhcp.agent [None req-2ff861bc-ce0d-4cab-8e19-12d4a7590da8 - - - - - -] DHCP configuration for ports {'8f9a5cc6-0208-4ece-8faa-cdf232904a0d'} is completed
Oct 14 06:09:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s
Oct 14 06:09:45 localhost systemd[1]: tmp-crun.UbQpWR.mount: Deactivated successfully.
Oct 14 06:09:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 06:09:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 06:09:46 localhost podman[320361]: 2025-10-14 10:09:46.549258735 +0000 UTC m=+0.084732554 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:09:46 localhost podman[320361]: 2025-10-14 10:09:46.583293126 +0000 UTC m=+0.118766925 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 06:09:46 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:09:46 localhost podman[320362]: 2025-10-14 10:09:46.603878761 +0000 UTC m=+0.135534389 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 06:09:46 localhost podman[320362]: 2025-10-14 10:09:46.61819508 +0000 UTC m=+0.149850748 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 06:09:46 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:09:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v54: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s
Oct 14 06:09:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 5.1 MiB/s wr, 39 op/s
Oct 14 06:09:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:09:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v56: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 4.2 MiB/s wr, 32 op/s
Oct 14 06:09:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:09:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:09:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:09:52 localhost podman[320401]: 2025-10-14 10:09:52.550096799 +0000 UTC m=+0.085593005 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 14 06:09:52 localhost podman[320402]: 2025-10-14 10:09:52.529213837 +0000 UTC m=+0.065725301 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 14 06:09:52 localhost podman[320401]: 2025-10-14 10:09:52.586792371 +0000 UTC m=+0.122288607 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 14 06:09:52 localhost podman[320400]: 2025-10-14 10:09:52.595645816 +0000 UTC m=+0.133523036 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public)
Oct 14 06:09:52 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:09:52 localhost podman[320400]: 2025-10-14 10:09:52.608119966 +0000 UTC m=+0.145997176 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, version=9.6, maintainer=Red Hat, Inc.)
Oct 14 06:09:52 localhost podman[320402]: 2025-10-14 10:09:52.61807762 +0000 UTC m=+0.154589004 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 14 06:09:52 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:09:52 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:09:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 4.1 MiB/s wr, 31 op/s
Oct 14 06:09:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:09:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 3.5 MiB/s wr, 27 op/s
Oct 14 06:09:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:09:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:57.636 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:09:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:57.636 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:09:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:09:57.637 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:09:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:09:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:10:00 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 14 06:10:00 localhost ceph-mon[307093]: overall HEALTH_OK
Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.446958) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436600447012, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2247, "num_deletes": 255, "total_data_size": 3844332, "memory_usage": 3983312, "flush_reason": "Manual Compaction"}
Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436600468365, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3712266, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21530, "largest_seqno": 23776, "table_properties": {"data_size": 3703076, "index_size": 5695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20778, "raw_average_key_size": 21, "raw_value_size": 3683929, "raw_average_value_size": 3790, "num_data_blocks": 244, "num_entries": 972, "num_filter_entries": 972, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436434, "oldest_key_time": 1760436434, "file_creation_time": 1760436600, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 21511 microseconds, and 9426 cpu microseconds.
Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.468469) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3712266 bytes OK Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.468500) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.470631) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.470653) EVENT_LOG_v1 {"time_micros": 1760436600470646, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.470678) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3834804, prev total WAL file size 3834804, number of live WAL files 2. Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.472068) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. 
'7061786F73003132303439' seq:0, type:0; will stop at (end) Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3625KB)], [36(17MB)] Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436600472117, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 22539302, "oldest_snapshot_seqno": -1} Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 12475 keys, 19266007 bytes, temperature: kUnknown Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436600589964, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 19266007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19194408, "index_size": 39306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31237, "raw_key_size": 333975, "raw_average_key_size": 26, "raw_value_size": 18981581, "raw_average_value_size": 1521, "num_data_blocks": 1501, "num_entries": 12475, "num_filter_entries": 12475, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436600, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.590275) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 19266007 bytes Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.592376) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.1 rd, 163.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 18.0 +0.0 blob) out(18.4 +0.0 blob), read-write-amplify(11.3) write-amplify(5.2) OK, records in: 13015, records dropped: 540 output_compression: NoCompression Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.592432) EVENT_LOG_v1 {"time_micros": 1760436600592405, "job": 20, "event": "compaction_finished", "compaction_time_micros": 117949, "compaction_time_cpu_micros": 51260, "output_level": 6, "num_output_files": 1, "total_output_size": 19266007, "num_input_records": 13015, "num_output_records": 12475, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005486731/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436600593271, "job": 20, "event": "table_file_deletion", "file_number": 38} Oct 14 06:10:00 localhost podman[246584]: time="2025-10-14T10:10:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436600596304, "job": 20, "event": "table_file_deletion", "file_number": 36} Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.471964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.596447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.596453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.596456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.596459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:00 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:00.596462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:00 localhost podman[246584]: @ - - 
[14/Oct/2025:10:10:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:10:00 localhost podman[246584]: @ - - [14/Oct/2025:10:10:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18817 "" "Go-http-client/1.1" Oct 14 06:10:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:03 localhost openstack_network_exporter[248748]: ERROR 10:10:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:10:03 localhost openstack_network_exporter[248748]: ERROR 10:10:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:10:03 localhost openstack_network_exporter[248748]: ERROR 10:10:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:10:03 localhost openstack_network_exporter[248748]: ERROR 10:10:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:10:03 localhost openstack_network_exporter[248748]: Oct 14 06:10:03 localhost openstack_network_exporter[248748]: ERROR 10:10:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:10:03 localhost openstack_network_exporter[248748]: Oct 14 06:10:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 
41 GiB / 42 GiB avail Oct 14 06:10:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:10:07 localhost podman[320470]: 2025-10-14 10:10:07.556373125 +0000 UTC m=+0.084212880 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, tcib_managed=true) Oct 14 06:10:07 localhost podman[320470]: 2025-10-14 10:10:07.597219807 +0000 UTC m=+0.125059552 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 06:10:07 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:10:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:10:09 Oct 14 06:10:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:10:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:10:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['manila_metadata', 'volumes', 'images', '.mgr', 'vms', 'manila_data', 'backups'] Oct 14 06:10:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:10:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 
45071990784 Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:10:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Oct 14 06:10:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:10:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:10:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:10:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:10:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:10:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:10:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:10:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:10:11 localhost podman[320488]: 2025-10-14 10:10:11.524361418 +0000 UTC m=+0.064453047 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:10:11 localhost podman[320488]: 2025-10-14 10:10:11.53911134 +0000 UTC m=+0.079202989 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:10:11 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:10:11 localhost systemd[1]: tmp-crun.CPikv2.mount: Deactivated successfully. Oct 14 06:10:11 localhost podman[320487]: 2025-10-14 10:10:11.597807394 +0000 UTC m=+0.138454087 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 14 06:10:11 localhost podman[320487]: 2025-10-14 10:10:11.603963807 +0000 UTC m=+0.144610480 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:10:11 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:10:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:13 localhost ovn_controller[156286]: 2025-10-14T10:10:13Z|00029|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory Oct 14 06:10:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:10:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:10:17 localhost systemd[1]: tmp-crun.Id2sFg.mount: Deactivated successfully. 
Oct 14 06:10:17 localhost podman[320546]: 2025-10-14 10:10:17.428809653 +0000 UTC m=+0.096105495 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd) Oct 14 06:10:17 localhost podman[320545]: 2025-10-14 10:10:17.467367214 +0000 UTC m=+0.139775112 container health_status 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible) Oct 14 06:10:17 localhost podman[320545]: 2025-10-14 10:10:17.507160907 +0000 UTC m=+0.179568785 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:10:17 localhost podman[320546]: 2025-10-14 10:10:17.5193613 +0000 UTC m=+0.186657112 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:10:17 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:10:17 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:10:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:18.152 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:10:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:18.153 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:10:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:10:18 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:10:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:10:18 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:10:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:10:18 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' Oct 14 06:10:18 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev b7865bb2-9f95-440c-8456-1419939cce17 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:10:18 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev b7865bb2-9f95-440c-8456-1419939cce17 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:10:18 localhost ceph-mgr[300442]: [progress INFO root] Completed event b7865bb2-9f95-440c-8456-1419939cce17 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:10:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:10:18 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:10:18 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:10:18 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:10:18 localhost nova_compute[295778]: 2025-10-14 10:10:18.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:18 localhost nova_compute[295778]: 2025-10-14 10:10:18.922 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:10:18 localhost nova_compute[295778]: 2025-10-14 
10:10:18.922 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:10:18 localhost nova_compute[295778]: 2025-10-14 10:10:18.923 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:10:18 localhost nova_compute[295778]: 2025-10-14 10:10:18.923 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:10:18 localhost nova_compute[295778]: 2025-10-14 10:10:18.923 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:10:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:19 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:10:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:10:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 
06:10:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:10:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/655893482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:10:19 localhost nova_compute[295778]: 2025-10-14 10:10:19.343 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:10:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0. 
Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.454770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40 Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436619454814, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 445, "num_deletes": 250, "total_data_size": 228709, "memory_usage": 238968, "flush_reason": "Manual Compaction"} Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436619460500, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 225050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23777, "largest_seqno": 24221, "table_properties": {"data_size": 222628, "index_size": 533, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5441, "raw_average_key_size": 16, "raw_value_size": 217667, "raw_average_value_size": 667, "num_data_blocks": 24, "num_entries": 326, "num_filter_entries": 326, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436601, "oldest_key_time": 1760436601, "file_creation_time": 1760436619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}} Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 5791 microseconds, and 2111 cpu microseconds. Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.460559) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 225050 bytes OK Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.460586) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.463288) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.463317) EVENT_LOG_v1 {"time_micros": 1760436619463308, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.463340) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 226016, prev total WAL file size 226016, number of 
live WAL files 2. Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.464421) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353139' seq:72057594037927935, type:22 .. '6B760031373730' seq:0, type:0; will stop at (end) Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(219KB)], [39(18MB)] Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436619464467, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 19491057, "oldest_snapshot_seqno": -1} Oct 14 06:10:19 localhost nova_compute[295778]: 2025-10-14 10:10:19.548 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:10:19 localhost nova_compute[295778]: 2025-10-14 10:10:19.550 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11860MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 12285 keys, 18451126 bytes, temperature: kUnknown Oct 14 06:10:19 localhost nova_compute[295778]: 2025-10-14 10:10:19.550 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436619551007, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 18451126, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18381866, "index_size": 37479, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 331468, "raw_average_key_size": 26, "raw_value_size": 18173264, "raw_average_value_size": 1479, "num_data_blocks": 1407, "num_entries": 12285, "num_filter_entries": 12285, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", 
"compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}} Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:10:19 localhost nova_compute[295778]: 2025-10-14 10:10:19.551 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.551333) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 18451126 bytes Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.575573) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.0 rd, 213.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 18.4 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(168.6) write-amplify(82.0) OK, records in: 12801, records dropped: 516 output_compression: NoCompression Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.575605) EVENT_LOG_v1 {"time_micros": 1760436619575591, "job": 22, "event": "compaction_finished", "compaction_time_micros": 86638, "compaction_time_cpu_micros": 
47511, "output_level": 6, "num_output_files": 1, "total_output_size": 18451126, "num_input_records": 12801, "num_output_records": 12285, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436619576036, "job": 22, "event": "table_file_deletion", "file_number": 41} Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436619578773, "job": 22, "event": "table_file_deletion", "file_number": 39} Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.464302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.578956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.578964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.578968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.578970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual 
compaction starting Oct 14 06:10:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:10:19.578973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:10:19 localhost nova_compute[295778]: 2025-10-14 10:10:19.683 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:10:19 localhost nova_compute[295778]: 2025-10-14 10:10:19.684 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:10:19 localhost nova_compute[295778]: 2025-10-14 10:10:19.700 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:10:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:10:20 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1109003935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:10:20 localhost nova_compute[295778]: 2025-10-14 10:10:20.142 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:10:20 localhost nova_compute[295778]: 2025-10-14 10:10:20.149 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:10:20 localhost nova_compute[295778]: 2025-10-14 10:10:20.177 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:10:20 localhost nova_compute[295778]: 2025-10-14 10:10:20.180 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:10:20 localhost nova_compute[295778]: 2025-10-14 10:10:20.180 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:10:20 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:10:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:22 localhost nova_compute[295778]: 2025-10-14 10:10:22.182 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:23.156 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:10:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:10:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:10:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:10:23 localhost podman[320697]: 2025-10-14 10:10:23.548592807 +0000 UTC m=+0.081435877 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:10:23 localhost podman[320697]: 2025-10-14 10:10:23.561111508 +0000 UTC m=+0.093954628 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:10:23 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:10:23 localhost podman[320696]: 2025-10-14 10:10:23.656308478 +0000 UTC m=+0.193731110 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 06:10:23 localhost podman[320695]: 2025-10-14 10:10:23.701072393 +0000 UTC m=+0.238645149 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, distribution-scope=public, architecture=x86_64) Oct 14 06:10:23 localhost podman[320695]: 2025-10-14 10:10:23.711459128 +0000 UTC m=+0.249031924 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350) Oct 14 06:10:23 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:10:23 localhost podman[320696]: 2025-10-14 10:10:23.764882783 +0000 UTC m=+0.302305455 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_controller) Oct 14 06:10:23 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:10:23 localhost nova_compute[295778]: 2025-10-14 10:10:23.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:24 localhost nova_compute[295778]: 2025-10-14 10:10:24.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:24 localhost nova_compute[295778]: 2025-10-14 10:10:24.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:24 localhost nova_compute[295778]: 2025-10-14 10:10:24.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:24 localhost nova_compute[295778]: 2025-10-14 10:10:24.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:24 localhost nova_compute[295778]: 2025-10-14 10:10:24.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, 
skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:10:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:26 localhost nova_compute[295778]: 2025-10-14 10:10:26.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:26 localhost nova_compute[295778]: 2025-10-14 10:10:26.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:10:26 localhost nova_compute[295778]: 2025-10-14 10:10:26.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:10:26 localhost nova_compute[295778]: 2025-10-14 10:10:26.924 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:10:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:27 localhost nova_compute[295778]: 2025-10-14 10:10:27.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:28 localhost nova_compute[295778]: 2025-10-14 10:10:28.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:10:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:29.474 270389 INFO neutron.agent.linux.ip_lib [None req-24e9b199-d353-49e4-9fd7-cd0696ef424a - - - - - -] Device tap48199f38-fd cannot be used as it has no MAC address#033[00m Oct 14 06:10:29 localhost kernel: device tap48199f38-fd entered promiscuous mode Oct 14 06:10:29 localhost NetworkManager[5972]: [1760436629.5089] manager: (tap48199f38-fd): new Generic device (/org/freedesktop/NetworkManager/Devices/14) Oct 14 06:10:29 localhost ovn_controller[156286]: 2025-10-14T10:10:29Z|00030|binding|INFO|Claiming lport 48199f38-fd12-4dec-9835-6635c7e5c5a7 for this chassis. 
Oct 14 06:10:29 localhost ovn_controller[156286]: 2025-10-14T10:10:29Z|00031|binding|INFO|48199f38-fd12-4dec-9835-6635c7e5c5a7: Claiming unknown Oct 14 06:10:29 localhost systemd-udevd[320775]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:10:29 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:29.523 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-b031757f-f610-486e-b256-d0edeb3a8180', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b031757f-f610-486e-b256-d0edeb3a8180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a912863089b4050b50010417538a2b4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adbcad8c-50ba-42d0-91a9-e7edd5a551da, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=48199f38-fd12-4dec-9835-6635c7e5c5a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:10:29 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:29.525 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 48199f38-fd12-4dec-9835-6635c7e5c5a7 in datapath b031757f-f610-486e-b256-d0edeb3a8180 bound to our chassis#033[00m Oct 14 06:10:29 localhost ovn_metadata_agent[161927]: 
2025-10-14 10:10:29.528 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b031757f-f610-486e-b256-d0edeb3a8180 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:10:29 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:29.531 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b88102a8-7b6c-4d91-8f52-e3c87d573340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:10:29 localhost journal[236030]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, ) Oct 14 06:10:29 localhost journal[236030]: hostname: np0005486731.localdomain Oct 14 06:10:29 localhost journal[236030]: ethtool ioctl error on tap48199f38-fd: No such device Oct 14 06:10:29 localhost ovn_controller[156286]: 2025-10-14T10:10:29Z|00032|binding|INFO|Setting lport 48199f38-fd12-4dec-9835-6635c7e5c5a7 ovn-installed in OVS Oct 14 06:10:29 localhost ovn_controller[156286]: 2025-10-14T10:10:29Z|00033|binding|INFO|Setting lport 48199f38-fd12-4dec-9835-6635c7e5c5a7 up in Southbound Oct 14 06:10:29 localhost journal[236030]: ethtool ioctl error on tap48199f38-fd: No such device Oct 14 06:10:29 localhost journal[236030]: ethtool ioctl error on tap48199f38-fd: No such device Oct 14 06:10:29 localhost journal[236030]: ethtool ioctl error on tap48199f38-fd: No such device Oct 14 06:10:29 localhost journal[236030]: ethtool ioctl error on tap48199f38-fd: No such device Oct 14 06:10:29 localhost journal[236030]: ethtool ioctl error on tap48199f38-fd: No such device Oct 14 06:10:29 localhost journal[236030]: ethtool ioctl error on tap48199f38-fd: No such device Oct 14 06:10:29 localhost journal[236030]: ethtool ioctl error on tap48199f38-fd: No such device Oct 14 06:10:30 localhost podman[320846]: Oct 14 06:10:30 localhost podman[320846]: 2025-10-14 
10:10:30.471698178 +0000 UTC m=+0.108085093 container create 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:10:30 localhost systemd[1]: Started libpod-conmon-9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58.scope. Oct 14 06:10:30 localhost podman[320846]: 2025-10-14 10:10:30.414878954 +0000 UTC m=+0.051265909 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:10:30 localhost systemd[1]: Started libcrun container. Oct 14 06:10:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247d19e5e405a2c3e489ac20d85c13b38e92abef18e4fcf8019011d07fea3338/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:10:30 localhost podman[320846]: 2025-10-14 10:10:30.543196512 +0000 UTC m=+0.179583437 container init 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:10:30 localhost podman[320846]: 2025-10-14 10:10:30.549792486 +0000 UTC m=+0.186179401 container 
start 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0) Oct 14 06:10:30 localhost dnsmasq[320864]: started, version 2.85 cachesize 150 Oct 14 06:10:30 localhost dnsmasq[320864]: DNS service limited to local subnets Oct 14 06:10:30 localhost dnsmasq[320864]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:10:30 localhost dnsmasq[320864]: warning: no upstream servers configured Oct 14 06:10:30 localhost dnsmasq-dhcp[320864]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:10:30 localhost dnsmasq[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/addn_hosts - 0 addresses Oct 14 06:10:30 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/host Oct 14 06:10:30 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/opts Oct 14 06:10:30 localhost podman[246584]: time="2025-10-14T10:10:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:10:30 localhost podman[246584]: @ - - [14/Oct/2025:10:10:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146312 "" "Go-http-client/1.1" Oct 14 06:10:30 localhost podman[246584]: @ - - [14/Oct/2025:10:10:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false 
HTTP/1.1" 200 19303 "" "Go-http-client/1.1" Oct 14 06:10:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:30.705 270389 INFO neutron.agent.dhcp.agent [None req-40cf3545-1bed-47b6-bf35-b7acc86f5c9c - - - - - -] DHCP configuration for ports {'6f2773ed-54b3-461c-b14d-86e7f9734f2b'} is completed#033[00m Oct 14 06:10:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:33 localhost openstack_network_exporter[248748]: ERROR 10:10:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:10:33 localhost openstack_network_exporter[248748]: ERROR 10:10:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:10:33 localhost openstack_network_exporter[248748]: ERROR 10:10:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:10:33 localhost openstack_network_exporter[248748]: ERROR 10:10:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:10:33 localhost openstack_network_exporter[248748]: Oct 14 06:10:33 localhost openstack_network_exporter[248748]: ERROR 10:10:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:10:33 localhost openstack_network_exporter[248748]: Oct 14 06:10:33 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:33.582 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:10:33Z, description=, 
device_id=556acacf-a623-4c83-8f30-47e4c7fdd166, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ffc72a31-1e64-48e2-9632-abc8d07e4c0c, ip_allocation=immediate, mac_address=fa:16:3e:7d:c6:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:10:27Z, description=, dns_domain=, id=b031757f-f610-486e-b256-d0edeb3a8180, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1705330756-network, port_security_enabled=True, project_id=4a912863089b4050b50010417538a2b4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26063, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=188, status=ACTIVE, subnets=['09a83b95-cc50-485d-b420-df1feb237be7'], tags=[], tenant_id=4a912863089b4050b50010417538a2b4, updated_at=2025-10-14T10:10:28Z, vlan_transparent=None, network_id=b031757f-f610-486e-b256-d0edeb3a8180, port_security_enabled=False, project_id=4a912863089b4050b50010417538a2b4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=219, status=DOWN, tags=[], tenant_id=4a912863089b4050b50010417538a2b4, updated_at=2025-10-14T10:10:33Z on network b031757f-f610-486e-b256-d0edeb3a8180#033[00m Oct 14 06:10:33 localhost dnsmasq[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/addn_hosts - 1 addresses Oct 14 06:10:33 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/host Oct 14 06:10:33 localhost podman[320882]: 2025-10-14 10:10:33.847881904 +0000 UTC m=+0.063958574 container kill 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:10:33 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/opts Oct 14 06:10:34 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:34.076 270389 INFO neutron.agent.dhcp.agent [None req-49c1c05d-3548-484a-8486-908c39795f53 - - - - - -] DHCP configuration for ports {'ffc72a31-1e64-48e2-9632-abc8d07e4c0c'} is completed#033[00m Oct 14 06:10:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:34 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:34.954 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:10:33Z, description=, device_id=556acacf-a623-4c83-8f30-47e4c7fdd166, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ffc72a31-1e64-48e2-9632-abc8d07e4c0c, ip_allocation=immediate, mac_address=fa:16:3e:7d:c6:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:10:27Z, description=, dns_domain=, id=b031757f-f610-486e-b256-d0edeb3a8180, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1705330756-network, port_security_enabled=True, project_id=4a912863089b4050b50010417538a2b4, 
provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26063, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=188, status=ACTIVE, subnets=['09a83b95-cc50-485d-b420-df1feb237be7'], tags=[], tenant_id=4a912863089b4050b50010417538a2b4, updated_at=2025-10-14T10:10:28Z, vlan_transparent=None, network_id=b031757f-f610-486e-b256-d0edeb3a8180, port_security_enabled=False, project_id=4a912863089b4050b50010417538a2b4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=219, status=DOWN, tags=[], tenant_id=4a912863089b4050b50010417538a2b4, updated_at=2025-10-14T10:10:33Z on network b031757f-f610-486e-b256-d0edeb3a8180#033[00m Oct 14 06:10:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:35 localhost dnsmasq[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/addn_hosts - 1 addresses Oct 14 06:10:35 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/host Oct 14 06:10:35 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/opts Oct 14 06:10:35 localhost podman[320918]: 2025-10-14 10:10:35.194405245 +0000 UTC m=+0.066229165 container kill 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:10:35 localhost 
neutron_dhcp_agent[270385]: 2025-10-14 10:10:35.459 270389 INFO neutron.agent.dhcp.agent [None req-eab3d500-d241-451d-ad92-5721eb93a71e - - - - - -] DHCP configuration for ports {'ffc72a31-1e64-48e2-9632-abc8d07e4c0c'} is completed#033[00m Oct 14 06:10:36 localhost ovn_controller[156286]: 2025-10-14T10:10:36Z|00034|ovn_bfd|INFO|Enabled BFD on interface ovn-953af5-0 Oct 14 06:10:36 localhost ovn_controller[156286]: 2025-10-14T10:10:36Z|00035|ovn_bfd|INFO|Enabled BFD on interface ovn-4e3575-0 Oct 14 06:10:36 localhost ovn_controller[156286]: 2025-10-14T10:10:36Z|00036|ovn_bfd|INFO|Enabled BFD on interface ovn-31b4da-0 Oct 14 06:10:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:10:38 localhost podman[320941]: 2025-10-14 10:10:38.545756152 +0000 UTC m=+0.087944109 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:10:38 localhost podman[320941]: 2025-10-14 10:10:38.557250407 +0000 UTC m=+0.099438374 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 14 06:10:38 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:10:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:10:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:10:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:10:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:10:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:10:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:10:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:10:42 localhost podman[320961]: 2025-10-14 10:10:42.544029367 +0000 UTC m=+0.084769484 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible) Oct 14 06:10:42 localhost podman[320962]: 2025-10-14 10:10:42.593695112 +0000 UTC m=+0.130196696 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:10:42 localhost podman[320962]: 2025-10-14 10:10:42.601747395 +0000 UTC m=+0.138248969 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:10:42 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:10:42 localhost podman[320961]: 2025-10-14 10:10:42.624644391 +0000 UTC m=+0.165384518 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent) Oct 14 06:10:42 localhost systemd[1]: 
6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:10:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:43.020 270389 INFO neutron.agent.linux.ip_lib [None req-78d8fd5c-dc0d-4cad-b608-5e2df09be929 - - - - - -] Device tap282e238e-dd cannot be used as it has no MAC address#033[00m Oct 14 06:10:43 localhost kernel: device tap282e238e-dd entered promiscuous mode Oct 14 06:10:43 localhost NetworkManager[5972]: [1760436643.0521] manager: (tap282e238e-dd): new Generic device (/org/freedesktop/NetworkManager/Devices/15) Oct 14 06:10:43 localhost ovn_controller[156286]: 2025-10-14T10:10:43Z|00037|binding|INFO|Claiming lport 282e238e-dd4a-4ab2-b9f4-b7da821184de for this chassis. Oct 14 06:10:43 localhost ovn_controller[156286]: 2025-10-14T10:10:43Z|00038|binding|INFO|282e238e-dd4a-4ab2-b9f4-b7da821184de: Claiming unknown Oct 14 06:10:43 localhost systemd-udevd[321013]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:10:43 localhost journal[236030]: ethtool ioctl error on tap282e238e-dd: No such device Oct 14 06:10:43 localhost ovn_controller[156286]: 2025-10-14T10:10:43Z|00039|binding|INFO|Setting lport 282e238e-dd4a-4ab2-b9f4-b7da821184de ovn-installed in OVS Oct 14 06:10:43 localhost journal[236030]: ethtool ioctl error on tap282e238e-dd: No such device Oct 14 06:10:43 localhost journal[236030]: ethtool ioctl error on tap282e238e-dd: No such device Oct 14 06:10:43 localhost journal[236030]: ethtool ioctl error on tap282e238e-dd: No such device Oct 14 06:10:43 localhost journal[236030]: ethtool ioctl error on tap282e238e-dd: No such device Oct 14 06:10:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:43 localhost journal[236030]: ethtool ioctl error on tap282e238e-dd: No such device Oct 14 06:10:43 localhost journal[236030]: ethtool ioctl error on tap282e238e-dd: No such device Oct 14 06:10:43 localhost journal[236030]: ethtool ioctl error on tap282e238e-dd: No such device Oct 14 06:10:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:43.263 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-ba133567-4ba1-4d96-820a-7959b7dc36a2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba133567-4ba1-4d96-820a-7959b7dc36a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85e3913d136b45ffb773eb96325628dd', 
'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7024b04b-2440-4a06-b6d2-b00d9850a0f2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=282e238e-dd4a-4ab2-b9f4-b7da821184de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:10:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:43.264 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 282e238e-dd4a-4ab2-b9f4-b7da821184de in datapath ba133567-4ba1-4d96-820a-7959b7dc36a2 bound to our chassis#033[00m Oct 14 06:10:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:43.268 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port f69de47f-cd57-443a-8bda-c569a57df19d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:10:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:43.268 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba133567-4ba1-4d96-820a-7959b7dc36a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:10:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:43.269 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[de9c1478-4e83-405b-8b32-e4e672387451]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:10:43 localhost ovn_controller[156286]: 2025-10-14T10:10:43Z|00040|binding|INFO|Setting lport 282e238e-dd4a-4ab2-b9f4-b7da821184de up in Southbound Oct 14 06:10:44 localhost podman[321084]: Oct 14 06:10:44 localhost podman[321084]: 2025-10-14 
10:10:44.097902056 +0000 UTC m=+0.095720935 container create afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:10:44 localhost podman[321084]: 2025-10-14 10:10:44.05005181 +0000 UTC m=+0.047870709 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:10:44 localhost systemd[1]: Started libpod-conmon-afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d.scope. Oct 14 06:10:44 localhost systemd[1]: Started libcrun container. Oct 14 06:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b022c8eb6125a41a006f39be92961a1e2880d38e62a27b62b58c103edf5fe65/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:10:44 localhost podman[321084]: 2025-10-14 10:10:44.189085221 +0000 UTC m=+0.186904080 container init afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:10:44 localhost podman[321084]: 2025-10-14 10:10:44.197340749 +0000 UTC m=+0.195159608 container 
start afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:10:44 localhost dnsmasq[321102]: started, version 2.85 cachesize 150 Oct 14 06:10:44 localhost dnsmasq[321102]: DNS service limited to local subnets Oct 14 06:10:44 localhost dnsmasq[321102]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:10:44 localhost dnsmasq[321102]: warning: no upstream servers configured Oct 14 06:10:44 localhost dnsmasq-dhcp[321102]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:10:44 localhost dnsmasq[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/addn_hosts - 0 addresses Oct 14 06:10:44 localhost dnsmasq-dhcp[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/host Oct 14 06:10:44 localhost dnsmasq-dhcp[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/opts Oct 14 06:10:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:44.379 270389 INFO neutron.agent.dhcp.agent [None req-ece118e8-bd8a-4ac5-b225-eed5860dab2e - - - - - -] DHCP configuration for ports {'8c68973f-4b4f-48f1-b961-e6becf4854dd'} is completed#033[00m Oct 14 06:10:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:44.991 
270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:10:44Z, description=, device_id=ce8920b2-dd67-4ff0-bb92-8d428de8525a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ad2a6f9a-27f0-4942-b232-1d36285536b3, ip_allocation=immediate, mac_address=fa:16:3e:00:2d:13, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:10:40Z, description=, dns_domain=, id=ba133567-4ba1-4d96-820a-7959b7dc36a2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1958494335-network, port_security_enabled=True, project_id=85e3913d136b45ffb773eb96325628dd, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43470, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=253, status=ACTIVE, subnets=['f8f49ad1-4e40-4759-8acc-fba95e2cff54'], tags=[], tenant_id=85e3913d136b45ffb773eb96325628dd, updated_at=2025-10-14T10:10:41Z, vlan_transparent=None, network_id=ba133567-4ba1-4d96-820a-7959b7dc36a2, port_security_enabled=False, project_id=85e3913d136b45ffb773eb96325628dd, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=282, status=DOWN, tags=[], tenant_id=85e3913d136b45ffb773eb96325628dd, updated_at=2025-10-14T10:10:44Z on network ba133567-4ba1-4d96-820a-7959b7dc36a2#033[00m Oct 14 06:10:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:45 localhost dnsmasq[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/addn_hosts - 1 
addresses Oct 14 06:10:45 localhost podman[321120]: 2025-10-14 10:10:45.373796746 +0000 UTC m=+0.058609622 container kill afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:10:45 localhost dnsmasq-dhcp[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/host Oct 14 06:10:45 localhost dnsmasq-dhcp[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/opts Oct 14 06:10:45 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:45.639 270389 INFO neutron.agent.dhcp.agent [None req-1c45142f-9790-4c7f-8ca9-00a1f0aa87fe - - - - - -] DHCP configuration for ports {'ad2a6f9a-27f0-4942-b232-1d36285536b3'} is completed#033[00m Oct 14 06:10:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:46.200 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:10:44Z, description=, device_id=ce8920b2-dd67-4ff0-bb92-8d428de8525a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ad2a6f9a-27f0-4942-b232-1d36285536b3, ip_allocation=immediate, mac_address=fa:16:3e:00:2d:13, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:10:40Z, description=, dns_domain=, id=ba133567-4ba1-4d96-820a-7959b7dc36a2, 
ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1958494335-network, port_security_enabled=True, project_id=85e3913d136b45ffb773eb96325628dd, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43470, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=253, status=ACTIVE, subnets=['f8f49ad1-4e40-4759-8acc-fba95e2cff54'], tags=[], tenant_id=85e3913d136b45ffb773eb96325628dd, updated_at=2025-10-14T10:10:41Z, vlan_transparent=None, network_id=ba133567-4ba1-4d96-820a-7959b7dc36a2, port_security_enabled=False, project_id=85e3913d136b45ffb773eb96325628dd, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=282, status=DOWN, tags=[], tenant_id=85e3913d136b45ffb773eb96325628dd, updated_at=2025-10-14T10:10:44Z on network ba133567-4ba1-4d96-820a-7959b7dc36a2#033[00m Oct 14 06:10:46 localhost systemd[1]: tmp-crun.MrKb7Y.mount: Deactivated successfully. 
Oct 14 06:10:46 localhost dnsmasq[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/addn_hosts - 1 addresses Oct 14 06:10:46 localhost dnsmasq-dhcp[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/host Oct 14 06:10:46 localhost podman[321159]: 2025-10-14 10:10:46.514591729 +0000 UTC m=+0.061680064 container kill afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:10:46 localhost dnsmasq-dhcp[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/opts Oct 14 06:10:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:46.801 270389 INFO neutron.agent.dhcp.agent [None req-0505ea59-b9b2-4fc1-b9f8-7e145e91de5e - - - - - -] DHCP configuration for ports {'ad2a6f9a-27f0-4942-b232-1d36285536b3'} is completed#033[00m Oct 14 06:10:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:47 localhost neutron_sriov_agent[263389]: 2025-10-14 10:10:47.600 2 INFO neutron.agent.securitygroups_rpc [None req-a54e3f06-187e-448e-927e-770f804e5356 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Security group member updated ['08e02d40-7eb0-493a-bf38-79869188d51f']#033[00m Oct 14 06:10:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 06:10:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:10:48 localhost systemd[1]: tmp-crun.Ca3o1S.mount: Deactivated successfully. Oct 14 06:10:48 localhost podman[321180]: 2025-10-14 10:10:48.558927813 +0000 UTC m=+0.093598069 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=iscsid) Oct 14 06:10:48 
localhost podman[321181]: 2025-10-14 10:10:48.602447775 +0000 UTC m=+0.136528246 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 06:10:48 localhost podman[321181]: 2025-10-14 10:10:48.611976618 +0000 UTC m=+0.146057069 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:10:48 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:10:48 localhost podman[321180]: 2025-10-14 10:10:48.668219887 +0000 UTC m=+0.202890163 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, config_id=iscsid) Oct 14 06:10:48 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:10:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 
06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.975 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:10:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:10:50 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:50.280 270389 INFO neutron.agent.linux.ip_lib [None req-c56174b7-1eb4-4149-95b4-0c2d981ce592 - - - - - -] Device tap1990655e-34 cannot be used as it has no MAC address#033[00m Oct 14 06:10:50 localhost kernel: device tap1990655e-34 entered promiscuous mode Oct 14 06:10:50 localhost NetworkManager[5972]: [1760436650.3429] manager: (tap1990655e-34): new Generic device (/org/freedesktop/NetworkManager/Devices/16) Oct 14 06:10:50 localhost ovn_controller[156286]: 2025-10-14T10:10:50Z|00041|binding|INFO|Claiming lport 1990655e-3485-4339-810b-3bca12b6d76b for this chassis. 
Oct 14 06:10:50 localhost ovn_controller[156286]: 2025-10-14T10:10:50Z|00042|binding|INFO|1990655e-3485-4339-810b-3bca12b6d76b: Claiming unknown Oct 14 06:10:50 localhost systemd-udevd[321230]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:10:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:50.352 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6e7f435b24646ecaa54e485b818329f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bde85ee0-511c-4612-bae5-13cb9e42823c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1990655e-3485-4339-810b-3bca12b6d76b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:10:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:50.354 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 1990655e-3485-4339-810b-3bca12b6d76b in datapath 326e2535-2661-4046-aab8-cd9fa2cc08f1 bound to our chassis#033[00m Oct 14 06:10:50 localhost ovn_metadata_agent[161927]: 2025-10-14 
10:10:50.356 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 326e2535-2661-4046-aab8-cd9fa2cc08f1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:10:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:50.357 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b8dcb187-e001-4383-9550-ef3c089c7899]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:10:50 localhost journal[236030]: ethtool ioctl error on tap1990655e-34: No such device Oct 14 06:10:50 localhost journal[236030]: ethtool ioctl error on tap1990655e-34: No such device Oct 14 06:10:50 localhost ovn_controller[156286]: 2025-10-14T10:10:50Z|00043|binding|INFO|Setting lport 1990655e-3485-4339-810b-3bca12b6d76b ovn-installed in OVS Oct 14 06:10:50 localhost ovn_controller[156286]: 2025-10-14T10:10:50Z|00044|binding|INFO|Setting lport 1990655e-3485-4339-810b-3bca12b6d76b up in Southbound Oct 14 06:10:50 localhost journal[236030]: ethtool ioctl error on tap1990655e-34: No such device Oct 14 06:10:50 localhost journal[236030]: ethtool ioctl error on tap1990655e-34: No such device Oct 14 06:10:50 localhost journal[236030]: ethtool ioctl error on tap1990655e-34: No such device Oct 14 06:10:50 localhost journal[236030]: ethtool ioctl error on tap1990655e-34: No such device Oct 14 06:10:50 localhost journal[236030]: ethtool ioctl error on tap1990655e-34: No such device Oct 14 06:10:50 localhost journal[236030]: ethtool ioctl error on tap1990655e-34: No such device Oct 14 06:10:50 localhost neutron_sriov_agent[263389]: 2025-10-14 10:10:50.750 2 INFO neutron.agent.securitygroups_rpc [None req-824c6cec-0c8a-4d5b-950c-077e64945e6c d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Security group member updated 
['f4a71cc4-401e-4fd9-a76d-664285c1f988']#033[00m Oct 14 06:10:50 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:50.823 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:10:50Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae, ip_allocation=immediate, mac_address=fa:16:3e:4a:4f:8c, name=tempest-parent-145339109, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:10:27Z, description=, dns_domain=, id=b031757f-f610-486e-b256-d0edeb3a8180, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1705330756-network, port_security_enabled=True, project_id=4a912863089b4050b50010417538a2b4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26063, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=188, status=ACTIVE, subnets=['09a83b95-cc50-485d-b420-df1feb237be7'], tags=[], tenant_id=4a912863089b4050b50010417538a2b4, updated_at=2025-10-14T10:10:28Z, vlan_transparent=None, network_id=b031757f-f610-486e-b256-d0edeb3a8180, port_security_enabled=True, project_id=4a912863089b4050b50010417538a2b4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f4a71cc4-401e-4fd9-a76d-664285c1f988'], standard_attr_id=324, status=DOWN, tags=[], tenant_id=4a912863089b4050b50010417538a2b4, updated_at=2025-10-14T10:10:50Z on network b031757f-f610-486e-b256-d0edeb3a8180#033[00m Oct 14 06:10:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB 
avail Oct 14 06:10:51 localhost dnsmasq[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/addn_hosts - 2 addresses Oct 14 06:10:51 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/host Oct 14 06:10:51 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/opts Oct 14 06:10:51 localhost podman[321308]: 2025-10-14 10:10:51.120297356 +0000 UTC m=+0.062500966 container kill 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 06:10:51 localhost podman[321328]: Oct 14 06:10:51 localhost podman[321328]: 2025-10-14 10:10:51.205176942 +0000 UTC m=+0.087428085 container create ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-326e2535-2661-4046-aab8-cd9fa2cc08f1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS) Oct 14 06:10:51 localhost systemd[1]: Started libpod-conmon-ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05.scope. Oct 14 06:10:51 localhost systemd[1]: Started libcrun container. 
Oct 14 06:10:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7e8d913a54542bae8268e7c51ab86fa204372ade1110d4da699e8fda260b87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:10:51 localhost podman[321328]: 2025-10-14 10:10:51.16805755 +0000 UTC m=+0.050308783 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:10:51 localhost podman[321328]: 2025-10-14 10:10:51.270266396 +0000 UTC m=+0.152517539 container init ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-326e2535-2661-4046-aab8-cd9fa2cc08f1, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 06:10:51 localhost podman[321328]: 2025-10-14 10:10:51.278670948 +0000 UTC m=+0.160922091 container start ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-326e2535-2661-4046-aab8-cd9fa2cc08f1, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:10:51 localhost dnsmasq[321355]: started, version 2.85 cachesize 150 Oct 14 06:10:51 localhost dnsmasq[321355]: DNS service limited to local subnets Oct 14 06:10:51 localhost dnsmasq[321355]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:10:51 localhost dnsmasq[321355]: warning: no upstream servers configured Oct 14 06:10:51 localhost dnsmasq-dhcp[321355]: DHCP, static leases only on 19.80.0.0, lease time 1d Oct 14 06:10:51 localhost dnsmasq[321355]: read /var/lib/neutron/dhcp/326e2535-2661-4046-aab8-cd9fa2cc08f1/addn_hosts - 0 addresses Oct 14 06:10:51 localhost dnsmasq-dhcp[321355]: read /var/lib/neutron/dhcp/326e2535-2661-4046-aab8-cd9fa2cc08f1/host Oct 14 06:10:51 localhost dnsmasq-dhcp[321355]: read /var/lib/neutron/dhcp/326e2535-2661-4046-aab8-cd9fa2cc08f1/opts Oct 14 06:10:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:51.396 270389 INFO neutron.agent.dhcp.agent [None req-740b55bb-2337-40a7-99d9-5552726ed9c6 - - - - - -] DHCP configuration for ports {'b622d7fd-00d0-4a03-83ea-2c26ab2e6fae'} is completed#033[00m Oct 14 06:10:51 localhost neutron_sriov_agent[263389]: 2025-10-14 10:10:51.538 2 INFO neutron.agent.securitygroups_rpc [None req-1fdb7ab6-e039-42d0-a00d-20226c0980d9 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Security group member updated ['08e02d40-7eb0-493a-bf38-79869188d51f']#033[00m Oct 14 06:10:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:51.550 270389 INFO neutron.agent.dhcp.agent [None req-dd1761a1-30d5-4113-9a72-a5dd6ff689a7 - - - - - -] DHCP configuration for ports {'3ef68f41-ea34-4162-bd93-4700131d939b'} is completed#033[00m Oct 14 06:10:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:51.572 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:10:51Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=2ce3b76c-371e-4f12-9045-22b8830b61bc, ip_allocation=immediate, mac_address=fa:16:3e:f1:5c:16, name=tempest-subport-459853245, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:10:47Z, description=, dns_domain=, id=326e2535-2661-4046-aab8-cd9fa2cc08f1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-1598247986, port_security_enabled=True, project_id=d6e7f435b24646ecaa54e485b818329f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15604, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=314, status=ACTIVE, subnets=['d72a946e-703f-4f63-a352-83a686c66592'], tags=[], tenant_id=d6e7f435b24646ecaa54e485b818329f, updated_at=2025-10-14T10:10:48Z, vlan_transparent=None, network_id=326e2535-2661-4046-aab8-cd9fa2cc08f1, port_security_enabled=True, project_id=d6e7f435b24646ecaa54e485b818329f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['08e02d40-7eb0-493a-bf38-79869188d51f'], standard_attr_id=325, status=DOWN, tags=[], tenant_id=d6e7f435b24646ecaa54e485b818329f, updated_at=2025-10-14T10:10:51Z on network 326e2535-2661-4046-aab8-cd9fa2cc08f1#033[00m Oct 14 06:10:51 localhost dnsmasq[321355]: read /var/lib/neutron/dhcp/326e2535-2661-4046-aab8-cd9fa2cc08f1/addn_hosts - 1 addresses Oct 14 06:10:51 localhost podman[321373]: 2025-10-14 10:10:51.767800088 +0000 UTC m=+0.057493122 container kill ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-326e2535-2661-4046-aab8-cd9fa2cc08f1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 06:10:51 localhost dnsmasq-dhcp[321355]: read /var/lib/neutron/dhcp/326e2535-2661-4046-aab8-cd9fa2cc08f1/host Oct 14 06:10:51 localhost dnsmasq-dhcp[321355]: read /var/lib/neutron/dhcp/326e2535-2661-4046-aab8-cd9fa2cc08f1/opts Oct 14 06:10:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:10:52.053 270389 INFO neutron.agent.dhcp.agent [None req-5a146452-9c34-4ed4-9097-3dcf3a2be59c - - - - - -] DHCP configuration for ports {'2ce3b76c-371e-4f12-9045-22b8830b61bc'} is completed#033[00m Oct 14 06:10:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:10:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:10:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:10:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:54 localhost podman[321394]: 2025-10-14 10:10:54.544298827 +0000 UTC m=+0.083404459 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 14 06:10:54 localhost podman[321394]: 2025-10-14 10:10:54.562186142 +0000 UTC m=+0.101291744 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': 
'/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) Oct 14 06:10:54 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:10:54 localhost systemd[1]: tmp-crun.igtVPk.mount: Deactivated successfully. 
Oct 14 06:10:54 localhost podman[321395]: 2025-10-14 10:10:54.651250359 +0000 UTC m=+0.186904819 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:10:54 localhost podman[321396]: 2025-10-14 10:10:54.726412319 +0000 UTC m=+0.259205353 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:10:54 localhost podman[321396]: 2025-10-14 10:10:54.738271423 +0000 UTC m=+0.271064447 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:10:54 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:10:54 localhost podman[321395]: 2025-10-14 10:10:54.759166206 +0000 UTC m=+0.294820626 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true) Oct 14 06:10:54 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:10:54 localhost neutron_sriov_agent[263389]: 2025-10-14 10:10:54.846 2 INFO neutron.agent.securitygroups_rpc [None req-913b626a-cd3c-47c0-b3fd-4256ea7d0f27 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Security group member updated ['f4a71cc4-401e-4fd9-a76d-664285c1f988']#033[00m Oct 14 06:10:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:57.637 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:10:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:57.637 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:10:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:10:57.637 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:10:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:10:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:10:59 localhost nova_compute[295778]: 2025-10-14 10:10:59.882 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Acquiring lock "51c986ce-19c4-46c3-80e9-9367d31f15ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:10:59 localhost nova_compute[295778]: 2025-10-14 10:10:59.882 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:10:59 localhost nova_compute[295778]: 2025-10-14 10:10:59.892 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:10:59 localhost nova_compute[295778]: 2025-10-14 10:10:59.892 2 DEBUG oslo_concurrency.lockutils [None 
req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:10:59 localhost nova_compute[295778]: 2025-10-14 10:10:59.906 2 DEBUG nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Oct 14 06:10:59 localhost nova_compute[295778]: 2025-10-14 10:10:59.911 2 DEBUG nova.compute.manager [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.022 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.022 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.028 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.028 2 INFO nova.compute.claims [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Claim successful on node np0005486731.localdomain#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.041 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.415 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.566 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Acquiring lock "daabd3b0-5555-49e7-a72f-51f6e096611a" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.567 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lock 
"daabd3b0-5555-49e7-a72f-51f6e096611a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.588 2 DEBUG nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Oct 14 06:11:00 localhost podman[246584]: time="2025-10-14T10:11:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:11:00 localhost podman[246584]: @ - - [14/Oct/2025:10:11:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149958 "" "Go-http-client/1.1" Oct 14 06:11:00 localhost podman[246584]: @ - - [14/Oct/2025:10:11:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20260 "" "Go-http-client/1.1" Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.679 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:00 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1228322245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.873 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.879 2 DEBUG nova.compute.provider_tree [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.900 2 DEBUG nova.scheduler.client.report [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.925 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.925 2 DEBUG nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.929 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.934 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Oct 14 06:11:00 localhost nova_compute[295778]: 2025-10-14 10:11:00.935 2 INFO nova.compute.claims [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Claim successful on node np0005486731.localdomain#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.004 2 DEBUG nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.004 2 DEBUG nova.network.neutron [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.044 2 INFO nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.068 2 DEBUG nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Start building block device mappings for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Oct 14 06:11:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v91: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.165 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.191 2 DEBUG nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Start spawning the instance on the hypervisor. 
_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.193 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.194 2 INFO nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Creating image(s)#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.241 2 DEBUG nova.storage.rbd_utils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] rbd image 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.279 2 DEBUG nova.storage.rbd_utils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] rbd image 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.318 2 DEBUG nova.storage.rbd_utils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] rbd image 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:01 localhost 
nova_compute[295778]: 2025-10-14 10:11:01.322 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Acquiring lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.324 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:01 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4235355092' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.624 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.631 2 DEBUG nova.compute.provider_tree [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.657 2 DEBUG nova.scheduler.client.report [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.683 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.684 2 DEBUG nova.compute.manager [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.688 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.693 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.694 2 INFO nova.compute.claims [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Claim successful on node np0005486731.localdomain#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.771 2 DEBUG nova.compute.manager [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.786 2 INFO nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.816 2 DEBUG nova.compute.manager [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Start building block device mappings for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.894 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.911 2 DEBUG nova.compute.manager [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.914 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.914 2 INFO nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Creating image(s)#033[00m Oct 14 06:11:01 localhost nova_compute[295778]: 2025-10-14 10:11:01.982 2 DEBUG nova.storage.rbd_utils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.021 2 DEBUG nova.storage.rbd_utils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.065 2 DEBUG nova.storage.rbd_utils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.070 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:02 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:02 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2873687829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.325 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.331 2 DEBUG nova.compute.provider_tree [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.345 2 DEBUG nova.scheduler.client.report [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.368 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.369 2 DEBUG nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.454 2 DEBUG nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.455 2 DEBUG nova.network.neutron [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.471 2 INFO nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Ignoring supplied device name: /dev/vda. 
Libvirt can't honour user-supplied dev names#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.493 2 DEBUG nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.647 2 DEBUG nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.649 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.649 2 INFO nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Creating image(s)#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.685 2 DEBUG nova.storage.rbd_utils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] rbd image daabd3b0-5555-49e7-a72f-51f6e096611a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m 
Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.725 2 DEBUG nova.storage.rbd_utils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] rbd image daabd3b0-5555-49e7-a72f-51f6e096611a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.764 2 DEBUG nova.storage.rbd_utils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] rbd image daabd3b0-5555-49e7-a72f-51f6e096611a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.769 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Acquiring lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:02 localhost nova_compute[295778]: 2025-10-14 10:11:02.790 2 DEBUG nova.virt.libvirt.imagebackend [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Image locations are: [{'url': 'rbd://fcadf6e2-9176-5818-a8d0-37b19acf8eaf/images/4d7273e1-0c4b-46b6-bdfa-9a43be3f063a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fcadf6e2-9176-5818-a8d0-37b19acf8eaf/images/4d7273e1-0c4b-46b6-bdfa-9a43be3f063a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Oct 14 06:11:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 145 MiB data, 710 MiB used, 41 GiB / 42 GiB avail Oct
14 06:11:03 localhost openstack_network_exporter[248748]: ERROR 10:11:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:11:03 localhost openstack_network_exporter[248748]: ERROR 10:11:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:11:03 localhost openstack_network_exporter[248748]: ERROR 10:11:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:11:03 localhost openstack_network_exporter[248748]: ERROR 10:11:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:11:03 localhost openstack_network_exporter[248748]: Oct 14 06:11:03 localhost openstack_network_exporter[248748]: ERROR 10:11:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:11:03 localhost openstack_network_exporter[248748]: Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.641 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.676 2 WARNING oslo_policy.policy [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. 
You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.677 2 WARNING oslo_policy.policy [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.682 2 DEBUG nova.policy [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a2c72478a7c4747a73158cd8119b6ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6e7f435b24646ecaa54e485b818329f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.708 2 DEBUG nova.policy [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 
'd6d06f9c969f4b25a388e6b1f8e79df2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4a912863089b4050b50010417538a2b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.712 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5.part --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.713 2 DEBUG nova.virt.images [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] 4d7273e1-0c4b-46b6-bdfa-9a43be3f063a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.715 2 DEBUG nova.privsep.utils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.715 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - 
default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5.part /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.895 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5.part /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5.converted" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.899 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.970 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5.converted --force-share --output=json" returned: 0 in 0.071s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:03 localhost nova_compute[295778]: 2025-10-14 10:11:03.971 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.006 2 DEBUG nova.storage.rbd_utils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] rbd image 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.010 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.029 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14
10:11:04.030 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.065 2 DEBUG nova.storage.rbd_utils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.069 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.091 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.092 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2
4a912863089b4050b50010417538a2b4 - - default default] Lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.132 2 DEBUG nova.storage.rbd_utils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] rbd image daabd3b0-5555-49e7-a72f-51f6e096611a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.137 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 daabd3b0-5555-49e7-a72f-51f6e096611a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:11:04.334 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005486731.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:10:50Z, description=, device_id=daabd3b0-5555-49e7-a72f-51f6e096611a, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-livemigrationtest-server-138942356, extra_dhcp_opts=[], fixed_ips=[], id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae, ip_allocation=immediate, mac_address=fa:16:3e:4a:4f:8c, name=tempest-parent-145339109, network_id=b031757f-f610-486e-b256-d0edeb3a8180, port_security_enabled=True,
project_id=4a912863089b4050b50010417538a2b4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['f4a71cc4-401e-4fd9-a76d-664285c1f988'], standard_attr_id=324, status=DOWN, tags=[], tenant_id=4a912863089b4050b50010417538a2b4, trunk_details=sub_ports=[], trunk_id=7953f0af-3e00-4aa5-8261-15e5663a4a9c, updated_at=2025-10-14T10:11:03Z on network b031757f-f610-486e-b256-d0edeb3a8180#033[00m Oct 14 06:11:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:04 localhost systemd[1]: tmp-crun.OXUPzg.mount: Deactivated successfully. Oct 14 06:11:04 localhost podman[321834]: 2025-10-14 10:11:04.542068282 +0000 UTC m=+0.058121940 container kill 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:11:04 localhost dnsmasq[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/addn_hosts - 2 addresses Oct 14 06:11:04 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/host Oct 14 06:11:04 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/opts Oct 14 06:11:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:11:04.713 270389 INFO neutron.agent.dhcp.agent [None req-215e41ea-1e8a-43b1-857d-542c1959ee59 - - - - - -] DHCP configuration for ports {'b622d7fd-00d0-4a03-83ea-2c26ab2e6fae'} is 
completed#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.755 2 DEBUG nova.network.neutron [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Successfully updated port: b622d7fd-00d0-4a03-83ea-2c26ab2e6fae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.780 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Acquiring lock "refresh_cache-daabd3b0-5555-49e7-a72f-51f6e096611a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.780 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Acquired lock "refresh_cache-daabd3b0-5555-49e7-a72f-51f6e096611a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.781 2 DEBUG nova.network.neutron [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.799 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 
51c986ce-19c4-46c3-80e9-9367d31f15ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.789s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.876 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.807s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.987 2 DEBUG nova.network.neutron [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 14 06:11:04 localhost nova_compute[295778]: 2025-10-14 10:11:04.997 2 DEBUG nova.storage.rbd_utils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] resizing rbd image 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.038 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 daabd3b0-5555-49e7-a72f-51f6e096611a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.901s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.043 2 DEBUG nova.storage.rbd_utils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] resizing rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Oct 14 06:11:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v93: 177 pgs: 177 active+clean; 284 MiB data, 884 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 68 op/s Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.116 2 DEBUG nova.network.neutron [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Successfully updated port: 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 _update_port 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.158 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Acquiring lock "refresh_cache-51c986ce-19c4-46c3-80e9-9367d31f15ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.158 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Acquired lock "refresh_cache-51c986ce-19c4-46c3-80e9-9367d31f15ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.159 2 DEBUG nova.network.neutron [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.228 2 DEBUG nova.storage.rbd_utils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] resizing rbd image daabd3b0-5555-49e7-a72f-51f6e096611a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.279 2 DEBUG nova.objects.instance [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lazy-loading 'migration_context' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.284 2 DEBUG nova.objects.instance [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lazy-loading 'migration_context' on Instance uuid 51c986ce-19c4-46c3-80e9-9367d31f15ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.296 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.297 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Ensure instance console log exists: /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.297 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.298 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 
09d62a810b754dce9a74b97c3df09013 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.298 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.300 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T10:09:39Z,direct_url=,disk_format='qcow2',id=4d7273e1-0c4b-46b6-bdfa-9a43be3f063a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='41187b090f3d4818a32baa37ce8a3991',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-14T10:09:41Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'device_type': 
'disk', 'image_id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.302 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.302 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Ensure instance console log exists: /var/lib/nova/instances/51c986ce-19c4-46c3-80e9-9367d31f15ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.302 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.303 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:05 localhost 
nova_compute[295778]: 2025-10-14 10:11:05.303 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.307 2 WARNING nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.309 2 DEBUG nova.virt.libvirt.host [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.310 2 DEBUG nova.virt.libvirt.host [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.312 2 DEBUG nova.virt.libvirt.host [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V2... 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.312 2 DEBUG nova.virt.libvirt.host [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.313 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.313 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T10:09:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3d2e2556-398d-47fa-b582-04a393026796',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T10:09:39Z,direct_url=,disk_format='qcow2',id=4d7273e1-0c4b-46b6-bdfa-9a43be3f063a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='41187b090f3d4818a32baa37ce8a3991',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-14T10:09:41Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.314 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.314 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.315 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.315 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.316 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.316 2 DEBUG nova.virt.hardware [None 
req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.316 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.317 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.317 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.317 2 DEBUG nova.virt.hardware [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.322 2 DEBUG nova.privsep.utils [None 
req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.322 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.417 2 DEBUG nova.objects.instance [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lazy-loading 'migration_context' on Instance uuid daabd3b0-5555-49e7-a72f-51f6e096611a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.435 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.435 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Ensure instance console log exists: /var/lib/nova/instances/daabd3b0-5555-49e7-a72f-51f6e096611a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m 
Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.436 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.436 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.436 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.525 2 DEBUG nova.network.neutron [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.621 2 DEBUG nova.compute.manager [req-0ab65267-f49e-407d-a553-3c0651581c27 req-1357745c-656d-414f-a8e7-311f84ba065d da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received event network-changed-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.622 2 DEBUG nova.compute.manager [req-0ab65267-f49e-407d-a553-3c0651581c27 req-1357745c-656d-414f-a8e7-311f84ba065d da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Refreshing instance network info cache due to event network-changed-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.622 2 DEBUG oslo_concurrency.lockutils [req-0ab65267-f49e-407d-a553-3c0651581c27 req-1357745c-656d-414f-a8e7-311f84ba065d da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "refresh_cache-51c986ce-19c4-46c3-80e9-9367d31f15ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.659 2 DEBUG nova.network.neutron [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Updating instance_info_cache with network_info: [{"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.683 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Releasing lock "refresh_cache-daabd3b0-5555-49e7-a72f-51f6e096611a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.684 2 DEBUG nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Instance network_info: |[{"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 
10:11:05.688 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Start _get_guest_xml network_info=[{"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T10:09:39Z,direct_url=,disk_format='qcow2',id=4d7273e1-0c4b-46b6-bdfa-9a43be3f063a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='41187b090f3d4818a32baa37ce8a3991',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-14T10:09:41Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.693 2 WARNING nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.695 2 DEBUG nova.virt.libvirt.host [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.696 2 DEBUG nova.virt.libvirt.host [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] CPU controller missing on host. 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.698 2 DEBUG nova.virt.libvirt.host [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.699 2 DEBUG nova.virt.libvirt.host [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.699 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.700 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T10:09:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3d2e2556-398d-47fa-b582-04a393026796',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta 
ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T10:09:39Z,direct_url=,disk_format='qcow2',id=4d7273e1-0c4b-46b6-bdfa-9a43be3f063a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='41187b090f3d4818a32baa37ce8a3991',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-14T10:09:41Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.700 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.701 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.701 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.702 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.702 2 DEBUG 
nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.703 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.703 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.704 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.704 2 DEBUG nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.705 2 DEBUG 
nova.virt.hardware [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.709 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:11:05 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1661721343' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.775 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.810 2 DEBUG nova.storage.rbd_utils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.815 2 DEBUG 
oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.837 2 DEBUG nova.network.neutron [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Updating instance_info_cache with network_info: [{"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.867 2 DEBUG oslo_concurrency.lockutils [None 
req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Releasing lock "refresh_cache-51c986ce-19c4-46c3-80e9-9367d31f15ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.868 2 DEBUG nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Instance network_info: |[{"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.868 2 DEBUG oslo_concurrency.lockutils [req-0ab65267-f49e-407d-a553-3c0651581c27 req-1357745c-656d-414f-a8e7-311f84ba065d da5827fb8ee54b95a0a3cf62fcdcc49a 
f669ac1a1893421f91ae49881790edbc - - default default] Acquired lock "refresh_cache-51c986ce-19c4-46c3-80e9-9367d31f15ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.869 2 DEBUG nova.network.neutron [req-0ab65267-f49e-407d-a553-3c0651581c27 req-1357745c-656d-414f-a8e7-311f84ba065d da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Refreshing network info cache for port 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.875 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Start _get_guest_xml network_info=[{"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, 
"vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T10:09:39Z,direct_url=,disk_format='qcow2',id=4d7273e1-0c4b-46b6-bdfa-9a43be3f063a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='41187b090f3d4818a32baa37ce8a3991',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-14T10:09:41Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.880 2 WARNING nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.884 2 DEBUG nova.virt.libvirt.host [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V1... 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.885 2 DEBUG nova.virt.libvirt.host [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.895 2 DEBUG nova.virt.libvirt.host [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.896 2 DEBUG nova.virt.libvirt.host [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.896 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.896 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T10:09:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3d2e2556-398d-47fa-b582-04a393026796',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T10:09:39Z,direct_url=,disk_format='qcow2',id=4d7273e1-0c4b-46b6-bdfa-9a43be3f063a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='41187b090f3d4818a32baa37ce8a3991',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-14T10:09:41Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.897 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.898 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.898 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.899 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.899 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.899 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Oct 14 
06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.899 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.900 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.900 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.900 2 DEBUG nova.virt.hardware [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.905 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 
10:11:05.956 2 DEBUG nova.compute.manager [req-5586a8b9-6b1d-4086-a17e-b1b52811f93d req-a1ba5014-5b88-4fea-b323-a601c96e4c5c da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-changed-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.956 2 DEBUG nova.compute.manager [req-5586a8b9-6b1d-4086-a17e-b1b52811f93d req-a1ba5014-5b88-4fea-b323-a601c96e4c5c da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Refreshing instance network info cache due to event network-changed-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.956 2 DEBUG oslo_concurrency.lockutils [req-5586a8b9-6b1d-4086-a17e-b1b52811f93d req-a1ba5014-5b88-4fea-b323-a601c96e4c5c da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "refresh_cache-daabd3b0-5555-49e7-a72f-51f6e096611a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.957 2 DEBUG oslo_concurrency.lockutils [req-5586a8b9-6b1d-4086-a17e-b1b52811f93d req-a1ba5014-5b88-4fea-b323-a601c96e4c5c da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquired lock "refresh_cache-daabd3b0-5555-49e7-a72f-51f6e096611a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:11:05 localhost nova_compute[295778]: 2025-10-14 10:11:05.957 2 DEBUG nova.network.neutron [req-5586a8b9-6b1d-4086-a17e-b1b52811f93d req-a1ba5014-5b88-4fea-b323-a601c96e4c5c 
da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Refreshing network info cache for port b622d7fd-00d0-4a03-83ea-2c26ab2e6fae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Oct 14 06:11:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:11:06 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/749376939' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.165 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.199 2 DEBUG nova.storage.rbd_utils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] rbd image daabd3b0-5555-49e7-a72f-51f6e096611a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.204 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command 
mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:11:06 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1466367723' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.287 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.290 2 DEBUG nova.objects.instance [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lazy-loading 'pci_devices' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.312 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] End _get_guest_xml xml= Oct 14 06:11:06 localhost nova_compute[295778]: cc1adead-5ea6-42fa-9c12-f4d35462f1a5 Oct 14 06:11:06 localhost nova_compute[295778]: instance-00000007 Oct 14 06:11:06 localhost nova_compute[295778]: 131072 Oct 14 06:11:06 localhost nova_compute[295778]: 1 Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: tempest-UnshelveToHostMultiNodesTest-server-766913962 Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:05 Oct 14 06:11:06 localhost 
nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: 128 Oct 14 06:11:06 localhost nova_compute[295778]: 1 Oct 14 06:11:06 localhost nova_compute[295778]: 0 Oct 14 06:11:06 localhost nova_compute[295778]: 0 Oct 14 06:11:06 localhost nova_compute[295778]: 1 Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: tempest-UnshelveToHostMultiNodesTest-643946357-project-member Oct 14 06:11:06 localhost nova_compute[295778]: tempest-UnshelveToHostMultiNodesTest-643946357 Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: RDO Oct 14 06:11:06 localhost nova_compute[295778]: OpenStack Compute Oct 14 06:11:06 localhost nova_compute[295778]: 27.5.2-0.20250829104910.6f8decf.el9 Oct 14 06:11:06 localhost nova_compute[295778]: cc1adead-5ea6-42fa-9c12-f4d35462f1a5 Oct 14 06:11:06 localhost nova_compute[295778]: cc1adead-5ea6-42fa-9c12-f4d35462f1a5 Oct 14 06:11:06 localhost nova_compute[295778]: Virtual Machine Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: hvm Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 
localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: /dev/urandom Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost 
nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Oct 14 06:11:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:11:06 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1347055289' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.347 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.389 2 DEBUG nova.storage.rbd_utils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] rbd image 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.395 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.435 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] No BDM found with device name vda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.435 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.437 2 INFO nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Using config drive#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.473 2 DEBUG nova.storage.rbd_utils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.544 2 INFO nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Creating config drive at /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.549 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config -ldots -allow-lowercase -allow-multidot -l 
-publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1xcnw_pa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:11:06 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/52083104' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.632 2 DEBUG nova.network.neutron [req-0ab65267-f49e-407d-a553-3c0651581c27 req-1357745c-656d-414f-a8e7-311f84ba065d da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Updated VIF entry in instance network info cache for port 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.633 2 DEBUG nova.network.neutron [req-0ab65267-f49e-407d-a553-3c0651581c27 req-1357745c-656d-414f-a8e7-311f84ba065d da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Updating instance_info_cache with network_info: [{"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, 
"tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.637 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.639 2 DEBUG nova.virt.libvirt.vif [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T10:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-138942356',display_name='tempest-LiveMigrationTest-server-138942356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='tempest-livemigrationtest-server-138942356',id=8,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a912863089b4050b50010417538a2b4',ramdisk_id='',reservation_id='r-6hil40u9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1892895176',owner_user_name='tempest-LiveMigrationTest-1892895176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T10:11:02Z,user_data=None,user_id='d6d06f9c969f4b25a388e6b1f8e79df2',uuid=daabd3b0-5555-49e7-a72f-51f6e096611a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') 
vif={"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.639 2 DEBUG nova.network.os_vif_util [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Converting VIF {"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, 
"tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.641 2 DEBUG nova.network.os_vif_util [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:8c,bridge_name='br-int',has_traffic_filtering=True,id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae,network=Network(b031757f-f610-486e-b256-d0edeb3a8180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb622d7fd-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.645 2 DEBUG nova.objects.instance [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid daabd3b0-5555-49e7-a72f-51f6e096611a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.662 2 DEBUG oslo_concurrency.lockutils [req-0ab65267-f49e-407d-a553-3c0651581c27 req-1357745c-656d-414f-a8e7-311f84ba065d da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Releasing lock "refresh_cache-51c986ce-19c4-46c3-80e9-9367d31f15ba" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.665 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] End _get_guest_xml xml= (domain XML elided: markup stripped during capture; recoverable fields: uuid daabd3b0-5555-49e7-a72f-51f6e096611a, name instance-00000008, memory 131072, 1 vCPU, nova name tempest-LiveMigrationTest-server-138942356, creationTime 2025-10-14 10:11:05, owner tempest-LiveMigrationTest-1892895176-project-member, project tempest-LiveMigrationTest-1892895176, sysinfo RDO / OpenStack Compute / 27.5.2-0.20250829104910.6f8decf.el9, Virtual Machine, os type hvm, rng backend /dev/urandom) _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.666 2 DEBUG nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Preparing to wait for external event network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.666 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Acquiring lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.667 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.667 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.669 2 DEBUG nova.virt.libvirt.vif [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] vif_type=ovs
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T10:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-138942356',display_name='tempest-LiveMigrationTest-server-138942356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='tempest-livemigrationtest-server-138942356',id=8,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a912863089b4050b50010417538a2b4',ramdisk_id='',reservation_id='r-6hil40u9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1892895176',owner_user_name='tempest-LiveMigrationTest-1892895176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T10:11:02Z,user_data=None,user_id='d6d06f9c969f4b25a388e6b1f8e79df2',uuid=daabd3b0-5555-49e7-a72f-51f6e096611a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='buil
ding') vif={"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.669 2 DEBUG nova.network.os_vif_util [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Converting VIF {"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": 
"4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.671 2 DEBUG nova.network.os_vif_util [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:8c,bridge_name='br-int',has_traffic_filtering=True,id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae,network=Network(b031757f-f610-486e-b256-d0edeb3a8180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb622d7fd-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.672 2 DEBUG os_vif [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:8c,bridge_name='br-int',has_traffic_filtering=True,id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae,network=Network(b031757f-f610-486e-b256-d0edeb3a8180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb622d7fd-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.705 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 
2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1xcnw_pa" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.734 2 DEBUG nova.storage.rbd_utils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.738 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.768 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.769 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default 
default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.769 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [POLLOUT] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.796 2 INFO oslo.privsep.daemon [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpprko4lu3/privsep.sock']
Oct 14 06:11:06 localhost ceph-mon[307093]:
mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:11:06 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/98798850' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.820 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.822 2 DEBUG nova.virt.libvirt.vif [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T10:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2110921355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2110921355',id=6,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None
,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6e7f435b24646ecaa54e485b818329f',ramdisk_id='',reservation_id='r-ndyvjswp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1148905026',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1148905026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T10:11:01Z,user_data=None,user_id='4a2c72478a7c4747a73158cd8119b6ba',uuid=51c986ce-19c4-46c3-80e9-9367d31f15ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.822 2 DEBUG nova.network.os_vif_util [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Converting VIF {"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.823 2 DEBUG nova.network.os_vif_util [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Converted object 
VIFOpenVSwitch(active=False,address=fa:16:3e:8f:66:a8,bridge_name='br-int',has_traffic_filtering=True,id=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77,network=Network(249801e2-2633-40b6-9890-ff6feb071ac2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5ccffc8d-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.824 2 DEBUG nova.objects.instance [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lazy-loading 'pci_devices' on Instance uuid 51c986ce-19c4-46c3-80e9-9367d31f15ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.844 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] End _get_guest_xml xml= Oct 14 06:11:06 localhost nova_compute[295778]: 51c986ce-19c4-46c3-80e9-9367d31f15ba Oct 14 06:11:06 localhost nova_compute[295778]: instance-00000006 Oct 14 06:11:06 localhost nova_compute[295778]: 131072 Oct 14 06:11:06 localhost nova_compute[295778]: 1 Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: tempest-LiveAutoBlockMigrationV225Test-server-2110921355 Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:05 Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: 128 Oct 14 06:11:06 localhost nova_compute[295778]: 1 Oct 14 06:11:06 localhost nova_compute[295778]: 0 Oct 14 06:11:06 localhost nova_compute[295778]: 0 Oct 14 06:11:06 localhost nova_compute[295778]: 1 Oct 14 06:11:06 
localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: tempest-LiveAutoBlockMigrationV225Test-1148905026-project-member Oct 14 06:11:06 localhost nova_compute[295778]: tempest-LiveAutoBlockMigrationV225Test-1148905026 Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: RDO Oct 14 06:11:06 localhost nova_compute[295778]: OpenStack Compute Oct 14 06:11:06 localhost nova_compute[295778]: 27.5.2-0.20250829104910.6f8decf.el9 Oct 14 06:11:06 localhost nova_compute[295778]: 51c986ce-19c4-46c3-80e9-9367d31f15ba Oct 14 06:11:06 localhost nova_compute[295778]: 51c986ce-19c4-46c3-80e9-9367d31f15ba Oct 14 06:11:06 localhost nova_compute[295778]: Virtual Machine Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: hvm Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 
localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: Oct 14 06:11:06 localhost nova_compute[295778]: 
Oct 14 06:11:06 localhost nova_compute[295778]: /dev/urandom Oct 14 06:11:06 localhost nova_compute[295778]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.845 2 DEBUG nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854
4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Preparing to wait for external event network-vif-plugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.845 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Acquiring lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.846 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.846 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.847 2 DEBUG nova.virt.libvirt.vif [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - 
default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T10:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2110921355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2110921355',id=6,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6e7f435b24646ecaa54e485b818329f',ramdisk_id='',reservation_id='r-ndyvjswp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1148905026',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1148905026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T10:11:01Z,user_data=None,user_id='4a2c72478a7c4747a73158cd8119b6ba',uuid=51c986ce-19c4-46c3-80e9-9367d31f15ba,vcpu_model=Virt
CPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.847 2 DEBUG nova.network.os_vif_util [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Converting VIF {"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, 
"dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.848 2 DEBUG nova.network.os_vif_util [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:66:a8,bridge_name='br-int',has_traffic_filtering=True,id=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77,network=Network(249801e2-2633-40b6-9890-ff6feb071ac2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5ccffc8d-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.849 2 DEBUG os_vif [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:66:a8,bridge_name='br-int',has_traffic_filtering=True,id=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77,network=Network(249801e2-2633-40b6-9890-ff6feb071ac2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5ccffc8d-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.850 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.913 2 DEBUG nova.network.neutron [req-5586a8b9-6b1d-4086-a17e-b1b52811f93d req-a1ba5014-5b88-4fea-b323-a601c96e4c5c da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Updated VIF entry in instance network info cache for port b622d7fd-00d0-4a03-83ea-2c26ab2e6fae. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.914 2 DEBUG nova.network.neutron [req-5586a8b9-6b1d-4086-a17e-b1b52811f93d req-a1ba5014-5b88-4fea-b323-a601c96e4c5c da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Updating instance_info_cache with network_info: [{"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:11:06 localhost nova_compute[295778]: 2025-10-14 10:11:06.932 2 DEBUG oslo_concurrency.lockutils [req-5586a8b9-6b1d-4086-a17e-b1b52811f93d req-a1ba5014-5b88-4fea-b323-a601c96e4c5c da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Releasing lock "refresh_cache-daabd3b0-5555-49e7-a72f-51f6e096611a" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.030 2 DEBUG oslo_concurrency.processutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.031 2 INFO nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Deleting local config drive /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config because it was imported into RBD.#033[00m Oct 14 06:11:07 localhost systemd[1]: Starting libvirt secret daemon... Oct 14 06:11:07 localhost systemd[1]: Started libvirt secret daemon. Oct 14 06:11:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 284 MiB data, 884 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 68 op/s Oct 14 06:11:07 localhost systemd-machined[205044]: New machine qemu-1-instance-00000007. Oct 14 06:11:07 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000007. 
Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.370 2 INFO oslo.privsep.daemon [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Spawned new privsep daemon via rootwrap#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.278 1513 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.283 1513 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.286 1513 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.287 1513 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1513#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.375 2 WARNING oslo_privsep.priv_context [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] privsep daemon already running#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb622d7fd-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:07 
localhost nova_compute[295778]: 2025-10-14 10:11:07.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb622d7fd-00, col_values=(('external_ids', {'iface-id': 'b622d7fd-00d0-4a03-83ea-2c26ab2e6fae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:4f:8c', 'vm-uuid': 'daabd3b0-5555-49e7-a72f-51f6e096611a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.671 2 INFO os_vif [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:8c,bridge_name='br-int',has_traffic_filtering=True,id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae,network=Network(b031757f-f610-486e-b256-d0edeb3a8180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb622d7fd-00')#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.672 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running 
txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ccffc8d-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.672 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ccffc8d-03, col_values=(('external_ids', {'iface-id': '5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:66:a8', 'vm-uuid': '51c986ce-19c4-46c3-80e9-9367d31f15ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.681 2 INFO os_vif [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:66:a8,bridge_name='br-int',has_traffic_filtering=True,id=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77,network=Network(249801e2-2633-40b6-9890-ff6feb071ac2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5ccffc8d-03')#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.724 2 DEBUG nova.virt.libvirt.driver [None 
req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.724 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.724 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] No VIF found with MAC fa:16:3e:4a:4f:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.725 2 INFO nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Using config drive#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.768 2 DEBUG nova.storage.rbd_utils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] rbd image daabd3b0-5555-49e7-a72f-51f6e096611a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.774 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Resumed> emit_event 
/usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.775 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] VM Resumed (Lifecycle Event)#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.776 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.777 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.777 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] No VIF found with MAC fa:16:3e:8f:66:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.777 2 INFO nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Using config drive#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.806 2 DEBUG nova.storage.rbd_utils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] rbd image 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.815 2 DEBUG nova.compute.manager [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.816 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Guest created on hypervisor spawn 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.824 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.829 2 INFO nova.virt.libvirt.driver [-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance spawned successfully.#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.830 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.832 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.872 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.873 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.873 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] VM Started (Lifecycle Event)#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.878 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.878 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.878 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.879 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 
2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.879 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.879 2 DEBUG nova.virt.libvirt.driver [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.907 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.909 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:07 localhost 
nova_compute[295778]: 2025-10-14 10:11:07.944 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.975 2 INFO nova.compute.manager [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Took 6.06 seconds to spawn the instance on the hypervisor.#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.976 2 DEBUG nova.compute.manager [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.984 2 INFO nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Creating config drive at /var/lib/nova/instances/daabd3b0-5555-49e7-a72f-51f6e096611a/disk.config#033[00m Oct 14 06:11:07 localhost nova_compute[295778]: 2025-10-14 10:11:07.987 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/daabd3b0-5555-49e7-a72f-51f6e096611a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdo_r2tvr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 
06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.033 2 INFO nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Creating config drive at /var/lib/nova/instances/51c986ce-19c4-46c3-80e9-9367d31f15ba/disk.config#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.037 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51c986ce-19c4-46c3-80e9-9367d31f15ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpyc1oo1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.061 2 INFO nova.compute.manager [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Took 8.05 seconds to build instance.#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.082 2 DEBUG oslo_concurrency.lockutils [None req-fe3356cd-bb64-44f8-ad1f-e1c20e524a2a 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 8.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.116 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 
4a912863089b4050b50010417538a2b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/daabd3b0-5555-49e7-a72f-51f6e096611a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdo_r2tvr" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.146 2 DEBUG nova.storage.rbd_utils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] rbd image daabd3b0-5555-49e7-a72f-51f6e096611a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.151 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/daabd3b0-5555-49e7-a72f-51f6e096611a/disk.config daabd3b0-5555-49e7-a72f-51f6e096611a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.172 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51c986ce-19c4-46c3-80e9-9367d31f15ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpyc1oo1" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:08 localhost 
nova_compute[295778]: 2025-10-14 10:11:08.226 2 DEBUG nova.storage.rbd_utils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] rbd image 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.237 2 DEBUG oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51c986ce-19c4-46c3-80e9-9367d31f15ba/disk.config 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.460 2 DEBUG oslo_concurrency.processutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/daabd3b0-5555-49e7-a72f-51f6e096611a/disk.config daabd3b0-5555-49e7-a72f-51f6e096611a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.461 2 INFO nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Deleting local config drive /var/lib/nova/instances/daabd3b0-5555-49e7-a72f-51f6e096611a/disk.config because it was imported into RBD.#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.495 2 DEBUG 
oslo_concurrency.processutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51c986ce-19c4-46c3-80e9-9367d31f15ba/disk.config 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.496 2 INFO nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Deleting local config drive /var/lib/nova/instances/51c986ce-19c4-46c3-80e9-9367d31f15ba/disk.config because it was imported into RBD.#033[00m Oct 14 06:11:08 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Oct 14 06:11:08 localhost kernel: device tapb622d7fd-00 entered promiscuous mode Oct 14 06:11:08 localhost NetworkManager[5972]: [1760436668.5508] manager: (tapb622d7fd-00): new Tun device (/org/freedesktop/NetworkManager/Devices/17) Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00045|binding|INFO|Claiming lport b622d7fd-00d0-4a03-83ea-2c26ab2e6fae for this chassis. Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00046|binding|INFO|b622d7fd-00d0-4a03-83ea-2c26ab2e6fae: Claiming fa:16:3e:4a:4f:8c 10.100.0.8 Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00047|binding|INFO|Claiming lport 677b0027-4428-47b7-b635-95f53cde1f8c for this chassis. Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00048|binding|INFO|677b0027-4428-47b7-b635-95f53cde1f8c: Claiming fa:16:3e:c7:9e:53 19.80.0.39 Oct 14 06:11:08 localhost systemd-udevd[322393]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:08.566 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:4f:8c 10.100.0.8'], port_security=['fa:16:3e:4a:4f:8c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-145339109', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'daabd3b0-5555-49e7-a72f-51f6e096611a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b031757f-f610-486e-b256-d0edeb3a8180', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-145339109', 'neutron:project_id': '4a912863089b4050b50010417538a2b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4a71cc4-401e-4fd9-a76d-664285c1f988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adbcad8c-50ba-42d0-91a9-e7edd5a551da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:08.568 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), 
priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:9e:53 19.80.0.39'], port_security=['fa:16:3e:c7:9e:53 19.80.0.39'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['b622d7fd-00d0-4a03-83ea-2c26ab2e6fae'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-144727916', 'neutron:cidrs': '19.80.0.39/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e45db34f-2947-4d1e-954d-d27d42257e3e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-144727916', 'neutron:project_id': '4a912863089b4050b50010417538a2b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4a71cc4-401e-4fd9-a76d-664285c1f988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=4e681d1f-d417-4332-aa34-0b36bc9d8797, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=677b0027-4428-47b7-b635-95f53cde1f8c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:08.570 161932 INFO neutron.agent.ovn.metadata.agent [-] Port b622d7fd-00d0-4a03-83ea-2c26ab2e6fae in datapath b031757f-f610-486e-b256-d0edeb3a8180 bound to our chassis#033[00m Oct 14 06:11:08 localhost NetworkManager[5972]: [1760436668.5717] device (tapb622d7fd-00): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 14 06:11:08 localhost NetworkManager[5972]: [1760436668.5743] manager: (tap5ccffc8d-03): new Tun device (/org/freedesktop/NetworkManager/Devices/18) Oct 14 06:11:08 localhost NetworkManager[5972]: [1760436668.5750] device (tapb622d7fd-00): state change: unavailable -> disconnected 
(reason 'none', sys-iface-state: 'external') Oct 14 06:11:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:08.574 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port c90d9f1c-2551-49e2-96db-58c80ebed69e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:11:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:08.575 161932 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b031757f-f610-486e-b256-d0edeb3a8180#033[00m Oct 14 06:11:08 localhost systemd-udevd[322546]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:11:08 localhost kernel: device tap5ccffc8d-03 entered promiscuous mode Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00049|binding|INFO|Claiming lport 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 for this chassis. Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00050|binding|INFO|5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77: Claiming fa:16:3e:8f:66:a8 10.100.0.9 Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00051|binding|INFO|Claiming lport 2ce3b76c-371e-4f12-9045-22b8830b61bc for this chassis. 
Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00052|binding|INFO|2ce3b76c-371e-4f12-9045-22b8830b61bc: Claiming fa:16:3e:f1:5c:16 19.80.0.152 Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00053|binding|INFO|Setting lport b622d7fd-00d0-4a03-83ea-2c26ab2e6fae ovn-installed in OVS Oct 14 06:11:08 localhost NetworkManager[5972]: [1760436668.6054] device (tap5ccffc8d-03): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 14 06:11:08 localhost NetworkManager[5972]: [1760436668.6075] device (tap5ccffc8d-03): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00054|binding|INFO|Setting lport b622d7fd-00d0-4a03-83ea-2c26ab2e6fae up in Southbound Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00055|binding|INFO|Setting lport 677b0027-4428-47b7-b635-95f53cde1f8c up in Southbound Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:08.609 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:5c:16 19.80.0.152'], port_security=['fa:16:3e:f1:5c:16 19.80.0.152'], 
type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-459853245', 'neutron:cidrs': '19.80.0.152/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-459853245', 'neutron:project_id': 'd6e7f435b24646ecaa54e485b818329f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '08e02d40-7eb0-493a-bf38-79869188d51f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=bde85ee0-511c-4612-bae5-13cb9e42823c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=2ce3b76c-371e-4f12-9045-22b8830b61bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:08.611 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:66:a8 10.100.0.9'], port_security=['fa:16:3e:8f:66:a8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1667242671', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '51c986ce-19c4-46c3-80e9-9367d31f15ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249801e2-2633-40b6-9890-ff6feb071ac2', 
'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1667242671', 'neutron:project_id': 'd6e7f435b24646ecaa54e485b818329f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '08e02d40-7eb0-493a-bf38-79869188d51f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca53c92a-b842-485b-a19e-4e345391dda0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:11:08 localhost systemd-machined[205044]: New machine qemu-2-instance-00000008. Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:08 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000008. 
Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00056|binding|INFO|Setting lport 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 ovn-installed in OVS Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00057|binding|INFO|Setting lport 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 up in Southbound Oct 14 06:11:08 localhost ovn_controller[156286]: 2025-10-14T10:11:08Z|00058|binding|INFO|Setting lport 2ce3b76c-371e-4f12-9045-22b8830b61bc up in Southbound Oct 14 06:11:08 localhost nova_compute[295778]: 2025-10-14 10:11:08.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:08 localhost systemd-machined[205044]: New machine qemu-3-instance-00000006. Oct 14 06:11:08 localhost systemd[1]: Started Virtual Machine qemu-3-instance-00000006. Oct 14 06:11:08 localhost podman[322550]: 2025-10-14 10:11:08.729579608 +0000 UTC m=+0.102900916 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:11:08 localhost podman[322550]: 2025-10-14 10:11:08.768372805 +0000 UTC m=+0.141694033 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS) Oct 14 06:11:08 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:11:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:11:09 Oct 14 06:11:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:11:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:11:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['images', 'manila_metadata', 'volumes', 'backups', '.mgr', 'manila_data', 'vms'] Oct 14 06:11:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.067 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[31354015-8257-4b6c-a478-543d606cbe81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.069 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb031757f-f1 in ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.071 320313 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface 
tapb031757f-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.071 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7362ea0f-6ca0-4582-8669-1e58048768a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.073 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[00c34c0b-dbad-443a-9f83-ae149e000765]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.104 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[161fd1bd-00c9-4cc4-8681-0eec00755f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 284 MiB data, 884 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 68 op/s Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.129 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[90fd9479-f635-4ae2-a9bb-db3c18267db7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.131 161932 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp45ox20ah/privsep.sock']#033[00m Oct 14 06:11:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:11:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:11:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:11:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:11:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:11:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007775149619625907 of space, bias 1.0, pg target 1.5550299239251812 quantized to 32 (current 32) Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.854144129210869 quantized to 32 (current 32) Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' 
root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:11:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019433103015075376 quantized to 16 (current 16) Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:11:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.371 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Started> emit_event 
/usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.373 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] VM Started (Lifecycle Event)#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.380 2 DEBUG nova.compute.manager [req-e0720a08-cbea-4ecf-83ca-abc0100321e6 req-eeed892a-50c6-4308-baa6-edcf788993f2 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.380 2 DEBUG oslo_concurrency.lockutils [req-e0720a08-cbea-4ecf-83ca-abc0100321e6 req-eeed892a-50c6-4308-baa6-edcf788993f2 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.381 2 DEBUG oslo_concurrency.lockutils [req-e0720a08-cbea-4ecf-83ca-abc0100321e6 req-eeed892a-50c6-4308-baa6-edcf788993f2 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.381 2 DEBUG oslo_concurrency.lockutils [req-e0720a08-cbea-4ecf-83ca-abc0100321e6 req-eeed892a-50c6-4308-baa6-edcf788993f2 
da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.382 2 DEBUG nova.compute.manager [req-e0720a08-cbea-4ecf-83ca-abc0100321e6 req-eeed892a-50c6-4308-baa6-edcf788993f2 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Processing event network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.383 2 DEBUG nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.389 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Oct 14 06:11:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.515 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] 
[instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.518 2 INFO nova.virt.libvirt.driver [-] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Instance spawned successfully.#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.518 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.520 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.543 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.543 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 
d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.544 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.544 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.544 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.545 2 DEBUG nova.virt.libvirt.driver [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Found default for hw_vif_model of virtio _register_undefined_instance_details 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.548 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.548 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.548 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] VM Paused (Lifecycle Event)#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.580 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.583 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.583 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] VM Resumed (Lifecycle Event)#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.606 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 
14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.610 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.620 2 INFO nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Took 6.97 seconds to spawn the instance on the hypervisor.#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.620 2 DEBUG nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.644 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.644 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.644 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] VM Started (Lifecycle Event)#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.685 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.689 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.689 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] VM Paused (Lifecycle Event)#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.712 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.714 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB 
power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.722 2 INFO nova.compute.manager [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Took 9.06 seconds to build instance.#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.768 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.770 2 DEBUG oslo_concurrency.lockutils [None req-c2629ebd-83d7-4b3c-ad48-06080b824c66 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 9.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.862 161932 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.865 161932 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp45ox20ah/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.728 322681 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.734 322681 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 14 06:11:09 localhost 
ovn_metadata_agent[161927]: 2025-10-14 10:11:09.738 322681 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.738 322681 INFO oslo.privsep.daemon [-] privsep daemon running as pid 322681#033[00m Oct 14 06:11:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:09.869 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[256fc249-7d42-45b2-9fe3-2a2c607cd71b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.968 2 DEBUG nova.compute.manager [req-67197fb4-bb2f-44b1-ba2e-294ca2bd437e req-7a5ffe10-7e22-4572-b8f3-bdc5f22725d5 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received event network-vif-plugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.969 2 DEBUG oslo_concurrency.lockutils [req-67197fb4-bb2f-44b1-ba2e-294ca2bd437e req-7a5ffe10-7e22-4572-b8f3-bdc5f22725d5 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.969 2 DEBUG oslo_concurrency.lockutils [req-67197fb4-bb2f-44b1-ba2e-294ca2bd437e req-7a5ffe10-7e22-4572-b8f3-bdc5f22725d5 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" acquired by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.969 2 DEBUG oslo_concurrency.lockutils [req-67197fb4-bb2f-44b1-ba2e-294ca2bd437e req-7a5ffe10-7e22-4572-b8f3-bdc5f22725d5 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.970 2 DEBUG nova.compute.manager [req-67197fb4-bb2f-44b1-ba2e-294ca2bd437e req-7a5ffe10-7e22-4572-b8f3-bdc5f22725d5 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Processing event network-vif-plugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.970 2 DEBUG nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.974 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.975 2 INFO nova.compute.manager [None 
req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] VM Resumed (Lifecycle Event)#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.979 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.984 2 INFO nova.virt.libvirt.driver [-] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Instance spawned successfully.#033[00m Oct 14 06:11:09 localhost nova_compute[295778]: 2025-10-14 10:11:09.984 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.016 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.021 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 
handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.036 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.037 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.037 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.038 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.039 2 DEBUG nova.virt.libvirt.driver [None 
req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.040 2 DEBUG nova.virt.libvirt.driver [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.052 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.111 2 INFO nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Took 8.92 seconds to spawn the instance on the hypervisor.#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.112 2 DEBUG nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.120 2 DEBUG oslo_concurrency.lockutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.121 2 DEBUG oslo_concurrency.lockutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" acquired by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.121 2 INFO nova.compute.manager [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Shelving#033[00m Oct 14 06:11:10 localhost 
nova_compute[295778]: 2025-10-14 10:11:10.150 2 DEBUG nova.virt.libvirt.driver [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.203 2 INFO nova.compute.manager [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Took 10.22 seconds to build instance.#033[00m Oct 14 06:11:10 localhost nova_compute[295778]: 2025-10-14 10:11:10.223 2 DEBUG oslo_concurrency.lockutils [None req-eba9e1b8-1ada-431d-ab56-1b0994d8d854 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 10.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:10 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:10.436 322681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:10 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:10.436 322681 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:10 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:10.436 322681 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: 
held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 285 MiB data, 920 MiB used, 41 GiB / 42 GiB avail; 3.0 MiB/s rd, 5.4 MiB/s wr, 158 op/s Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.167 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c1a483-ab10-44d7-b6a9-c4dc63238813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.193 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[6e55fc1f-39e3-401c-bf11-45ac6d71c2a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost NetworkManager[5972]: [1760436671.1944] manager: (tapb031757f-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/19) Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.244 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[11691500-430d-488d-9b02-3e62d89738ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.248 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[673d0999-f8ae-4b39-9c28-e8e97d6f5c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapb031757f-f1: link becomes ready Oct 14 06:11:11 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapb031757f-f0: link becomes ready Oct 14 06:11:11 localhost NetworkManager[5972]: [1760436671.2756] device (tapb031757f-f0): carrier: link connected Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.277 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[11415f71-c40c-4a89-9c9a-0563e92c9d1d]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.304 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e58c86-3d9d-4e70-bbb2-bfdd9626e038]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb031757f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:28:44:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 
'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1275837, 'reachable_time': 17283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 
'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322706, 'error': None, 'target': 'ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.325 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[42f40e7a-f15b-4db9-a32a-d7bd7e2533b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe28:44cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1275837, 'tstamp': 1275837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322707, 'error': None, 'target': 'ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.358 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[43361b7a-3ab5-4c5b-9042-b78356deaab6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb031757f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], 
['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:28:44:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 
'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1275837, 'reachable_time': 17283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 
1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322708, 'error': None, 'target': 'ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.407 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6a2e79-bdf1-4953-b250-dbee8879cd77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.489 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab4daca-e2f4-40de-8724-06a74b0e4ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.491 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb031757f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.492 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.492 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb031757f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:11 localhost kernel: device tapb031757f-f0 entered promiscuous mode Oct 14 06:11:11 localhost nova_compute[295778]: 2025-10-14 
10:11:11.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.497 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb031757f-f0, col_values=(('external_ids', {'iface-id': '6f2773ed-54b3-461c-b14d-86e7f9734f2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:11 localhost ovn_controller[156286]: 2025-10-14T10:11:11Z|00059|binding|INFO|Releasing lport 6f2773ed-54b3-461c-b14d-86e7f9734f2b from this chassis (sb_readonly=0) Oct 14 06:11:11 localhost nova_compute[295778]: 2025-10-14 10:11:11.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.506 161932 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b031757f-f610-486e-b256-d0edeb3a8180.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b031757f-f610-486e-b256-d0edeb3a8180.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.507 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[cb50c4a0-f10d-4f20-8e8e-654d2bb2bf02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.508 161932 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: global Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: log /dev/log local0 debug Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: log-tag 
haproxy-metadata-proxy-b031757f-f610-486e-b256-d0edeb3a8180 Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: user root Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: group root Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: maxconn 1024 Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: pidfile /var/lib/neutron/external/pids/b031757f-f610-486e-b256-d0edeb3a8180.pid.haproxy Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: daemon Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: defaults Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: log global Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: mode http Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: option httplog Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: option dontlognull Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: option http-server-close Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: option forwardfor Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: retries 3 Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: timeout http-request 30s Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: timeout connect 30s Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: timeout client 32s Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: timeout server 32s Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: timeout http-keep-alive 30s Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: listen listener Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: bind 169.254.169.254:80 Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: server metadata /var/lib/neutron/metadata_proxy Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: http-request add-header X-OVN-Network-ID b031757f-f610-486e-b256-d0edeb3a8180 Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 
create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 14 06:11:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:11.508 161932 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180', 'env', 'PROCESS_TAG=haproxy-b031757f-f610-486e-b256-d0edeb3a8180', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b031757f-f610-486e-b256-d0edeb3a8180.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 14 06:11:11 localhost nova_compute[295778]: 2025-10-14 10:11:11.861 2 DEBUG nova.compute.manager [req-8c19c568-8616-4a9b-a248-015e4be32d25 req-4ecd2116-c8a9-40c8-aa09-9468600a917f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:11 localhost nova_compute[295778]: 2025-10-14 10:11:11.861 2 DEBUG oslo_concurrency.lockutils [req-8c19c568-8616-4a9b-a248-015e4be32d25 req-4ecd2116-c8a9-40c8-aa09-9468600a917f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:11 localhost nova_compute[295778]: 2025-10-14 10:11:11.861 2 DEBUG oslo_concurrency.lockutils [req-8c19c568-8616-4a9b-a248-015e4be32d25 req-4ecd2116-c8a9-40c8-aa09-9468600a917f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" acquired by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:11 localhost nova_compute[295778]: 2025-10-14 10:11:11.861 2 DEBUG oslo_concurrency.lockutils [req-8c19c568-8616-4a9b-a248-015e4be32d25 req-4ecd2116-c8a9-40c8-aa09-9468600a917f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:11 localhost nova_compute[295778]: 2025-10-14 10:11:11.861 2 DEBUG nova.compute.manager [req-8c19c568-8616-4a9b-a248-015e4be32d25 req-4ecd2116-c8a9-40c8-aa09-9468600a917f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] No waiting events found dispatching network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:11:11 localhost nova_compute[295778]: 2025-10-14 10:11:11.861 2 WARNING nova.compute.manager [req-8c19c568-8616-4a9b-a248-015e4be32d25 req-4ecd2116-c8a9-40c8-aa09-9468600a917f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received unexpected event network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae for instance with vm_state active and task_state None.#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.049 2 DEBUG nova.compute.manager [req-6eb16a48-e3ab-4de4-80a6-8865a67bc728 req-24d9a844-c3bb-470a-9ce1-8bd2787fe248 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received event 
network-vif-plugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.051 2 DEBUG oslo_concurrency.lockutils [req-6eb16a48-e3ab-4de4-80a6-8865a67bc728 req-24d9a844-c3bb-470a-9ce1-8bd2787fe248 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.052 2 DEBUG oslo_concurrency.lockutils [req-6eb16a48-e3ab-4de4-80a6-8865a67bc728 req-24d9a844-c3bb-470a-9ce1-8bd2787fe248 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.053 2 DEBUG oslo_concurrency.lockutils [req-6eb16a48-e3ab-4de4-80a6-8865a67bc728 req-24d9a844-c3bb-470a-9ce1-8bd2787fe248 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.054 2 DEBUG nova.compute.manager [req-6eb16a48-e3ab-4de4-80a6-8865a67bc728 req-24d9a844-c3bb-470a-9ce1-8bd2787fe248 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] No waiting events 
found dispatching network-vif-plugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.054 2 WARNING nova.compute.manager [req-6eb16a48-e3ab-4de4-80a6-8865a67bc728 req-24d9a844-c3bb-470a-9ce1-8bd2787fe248 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received unexpected event network-vif-plugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 for instance with vm_state active and task_state None.#033[00m Oct 14 06:11:12 localhost podman[322741]: Oct 14 06:11:12 localhost podman[322741]: 2025-10-14 10:11:12.110507399 +0000 UTC m=+0.125881894 container create 4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:11:12 localhost podman[322741]: 2025-10-14 10:11:12.049110533 +0000 UTC m=+0.064485078 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 14 06:11:12 localhost systemd[1]: Started libpod-conmon-4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d.scope. Oct 14 06:11:12 localhost systemd[1]: Started libcrun container. 
Oct 14 06:11:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7febd8cf85fac29bb2be531e8fbc6265932f8c1081dc1647d7c2e95f1a0f98dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:11:12 localhost podman[322741]: 2025-10-14 10:11:12.213261949 +0000 UTC m=+0.228636474 container init 4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 06:11:12 localhost podman[322741]: 2025-10-14 10:11:12.223538671 +0000 UTC m=+0.238913186 container start 4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:11:12 localhost neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180[322755]: [NOTICE] (322759) : New worker (322761) forked Oct 14 06:11:12 localhost neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180[322755]: [NOTICE] (322759) : Loading success. 
Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.309 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 677b0027-4428-47b7-b635-95f53cde1f8c in datapath e45db34f-2947-4d1e-954d-d27d42257e3e unbound from our chassis#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.313 161932 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e45db34f-2947-4d1e-954d-d27d42257e3e#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.325 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[766a4fce-c3d9-4005-a7d7-57d6e756f54b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.326 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape45db34f-21 in ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.328 320313 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape45db34f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.328 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[a3bc4e75-9c00-4b0d-9847-e885948d2071]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.329 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[77a7d5d4-07f6-4e68-b811-86cfd7ef8529]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.359 162035 DEBUG oslo.privsep.daemon [-] privsep: 
reply[a4c30e51-00bc-4fea-9fd8-93e7258772a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.385 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[fbfb968d-8a02-4540-bd10-554fcc8609fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.430 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae9fc45-9daf-4d4a-9bd4-743b6ef3b722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.436 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[9be35437-d6f5-4017-8284-f8b1440e0dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost NetworkManager[5972]: [1760436672.4375] manager: (tape45db34f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/20) Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.489 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[090b40bf-fd8a-4f77-8647-933d081039a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.493 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[6702491b-d112-47ff-b1d5-273202ceb44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tape45db34f-21: link becomes ready Oct 14 06:11:12 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
tape45db34f-20: link becomes ready Oct 14 06:11:12 localhost NetworkManager[5972]: [1760436672.5197] device (tape45db34f-20): carrier: link connected Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.522 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[dd22efc6-2e86-4f71-aa80-4eb17d1687ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.558 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[92d25804-1b8a-433f-9a11-05b865013adc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape45db34f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:a0:94:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 
'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1275962, 'reachable_time': 23837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 
'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322781, 'error': None, 'target': 'ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.586 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e25d2605-2f57-47cf-9361-6217f3dbf711]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:948b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1275962, 'tstamp': 1275962}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322782, 'error': None, 'target': 'ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 
2025-10-14 10:11:12.618 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b70de0-b6df-40c8-bce8-fa98554bb78e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape45db34f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:a0:94:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', 
{'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1275962, 'reachable_time': 23837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 
'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322783, 'error': None, 'target': 'ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.668 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c0a62e-258a-495f-b471-1f27e20e5fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.766 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[a149398c-1473-43d2-8868-69e4dc6d0626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.768 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape45db34f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.769 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.771 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape45db34f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:12 localhost kernel: device tape45db34f-20 entered promiscuous mode Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.776 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape45db34f-20, col_values=(('external_ids', {'iface-id': 'eaac0aff-a3e3-4086-98c7-adc34e5a13a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:12 localhost ovn_controller[156286]: 2025-10-14T10:11:12Z|00060|binding|INFO|Releasing lport eaac0aff-a3e3-4086-98c7-adc34e5a13a7 from this chassis (sb_readonly=0) Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.790 
161932 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e45db34f-2947-4d1e-954d-d27d42257e3e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e45db34f-2947-4d1e-954d-d27d42257e3e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 14 06:11:12 localhost nova_compute[295778]: 2025-10-14 10:11:12.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.792 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c506f1-8bdb-42f1-ac4f-a4e2f634881f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.797 161932 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: global Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: log /dev/log local0 debug Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: log-tag haproxy-metadata-proxy-e45db34f-2947-4d1e-954d-d27d42257e3e Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: user root Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: group root Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: maxconn 1024 Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: pidfile /var/lib/neutron/external/pids/e45db34f-2947-4d1e-954d-d27d42257e3e.pid.haproxy Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: daemon Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: defaults Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: log global Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: mode http Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: option httplog Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 
option dontlognull Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: option http-server-close Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: option forwardfor Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: retries 3 Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: timeout http-request 30s Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: timeout connect 30s Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: timeout client 32s Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: timeout server 32s Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: timeout http-keep-alive 30s Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: listen listener Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: bind 169.254.169.254:80 Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: server metadata /var/lib/neutron/metadata_proxy Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: http-request add-header X-OVN-Network-ID e45db34f-2947-4d1e-954d-d27d42257e3e Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 14 06:11:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:12.798 161932 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e', 'env', 'PROCESS_TAG=haproxy-e45db34f-2947-4d1e-954d-d27d42257e3e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e45db34f-2947-4d1e-954d-d27d42257e3e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 14 06:11:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 06:11:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:11:13 localhost podman[322796]: 2025-10-14 10:11:13.063656944 +0000 UTC m=+0.101437877 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:11:13 localhost podman[322796]: 2025-10-14 10:11:13.108825009 +0000 UTC m=+0.146605932 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:11:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 177 active+clean; 285 MiB data, 920 MiB used, 41 GiB / 42 GiB avail; 3.0 MiB/s rd, 5.4 MiB/s wr, 158 op/s Oct 14 06:11:13 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:11:13 localhost podman[322795]: 2025-10-14 10:11:13.127303799 +0000 UTC m=+0.156614958 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 06:11:13 localhost podman[322795]: 2025-10-14 10:11:13.162427408 +0000 UTC m=+0.191738607 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 06:11:13 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:11:13 localhost podman[322854]: Oct 14 06:11:13 localhost podman[322854]: 2025-10-14 10:11:13.446650854 +0000 UTC m=+0.160263825 container create dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 06:11:13 localhost podman[322854]: 2025-10-14 10:11:13.365396993 +0000 UTC m=+0.079010014 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 14 06:11:13 localhost systemd[1]: Started libpod-conmon-dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970.scope. Oct 14 06:11:13 localhost systemd[1]: Started libcrun container. 
Oct 14 06:11:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20140e5c032c2c5295c9e9dd6d0ca62d2e406a4e33b70969c6991603a0543326/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:11:13 localhost podman[322854]: 2025-10-14 10:11:13.537987871 +0000 UTC m=+0.251600872 container init dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:11:13 localhost podman[322854]: 2025-10-14 10:11:13.549961359 +0000 UTC m=+0.263574350 container start dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:11:13 localhost neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e[322869]: [NOTICE] (322873) : New worker (322875) forked Oct 14 06:11:13 localhost neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e[322869]: [NOTICE] (322873) : Loading success. 
Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.628 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 2ce3b76c-371e-4f12-9045-22b8830b61bc in datapath 326e2535-2661-4046-aab8-cd9fa2cc08f1 unbound from our chassis#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.633 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port af3a05e7-dee4-4ed7-a280-37038ee76db0 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.634 161932 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 326e2535-2661-4046-aab8-cd9fa2cc08f1#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.644 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[c1528f59-deaf-4e10-9e9a-a7e8def7b03d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.646 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap326e2535-21 in ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.648 320313 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap326e2535-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.648 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[cc62cb68-3a06-45c4-8259-bd1537499706]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.650 
320313 DEBUG oslo.privsep.daemon [-] privsep: reply[d162c92a-13b5-46b5-91a9-7be0a973cd76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.675 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[43a90f8b-4e55-4e29-91c5-4048f15a6cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.696 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[77e9a3c9-b82b-4cd8-8adf-f72eb2be66e0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.725 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[c59fd037-9fad-445d-ae66-e0d3bab210a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost NetworkManager[5972]: [1760436673.7344] manager: (tap326e2535-20): new Veth device (/org/freedesktop/NetworkManager/Devices/21) Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.736 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[040507a4-b468-4b1b-bb1a-9f55903a5b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.777 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[97585bcf-44fe-499b-bcbc-898ba778442f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.781 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[95f669b5-fd59-433a-97d3-1a53ff2b7275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
tap326e2535-21: link becomes ready Oct 14 06:11:13 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap326e2535-20: link becomes ready Oct 14 06:11:13 localhost NetworkManager[5972]: [1760436673.8110] device (tap326e2535-20): carrier: link connected Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.815 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[32e0cb40-840d-47a9-944e-6419837dcb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.834 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e01e896b-4b8a-4ee0-8a70-8ddd74b4ca91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap326e2535-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:9e:f1:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 
'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1276091, 'reachable_time': 33945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 
'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322895, 'error': None, 'target': 'ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.850 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[33b95e17-980a-43f6-92a4-9ea64f9ba833]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:f12e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1276091, 'tstamp': 1276091}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322896, 'error': None, 'target': 'ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.874 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[f829242f-1c22-49a3-b2a0-4bed71900767]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap326e2535-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:9e:f1:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 
'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1276091, 'reachable_time': 33945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 
'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322897, 'error': None, 'target': 'ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost nova_compute[295778]: 2025-10-14 10:11:13.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:13 localhost nova_compute[295778]: 2025-10-14 10:11:13.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.911 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[03d90e83-3be5-4136-bf96-0c52cecc31f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost nova_compute[295778]: 2025-10-14 10:11:13.923 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] There are 0 instances to clean _run_pending_deletes 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.985 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7567f7ff-3926-4797-bb19-ccaab7349439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.988 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap326e2535-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.989 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 14 06:11:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:13.990 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap326e2535-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:13 localhost nova_compute[295778]: 2025-10-14 10:11:13.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:13 localhost kernel: device tap326e2535-20 entered promiscuous mode Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.005 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap326e2535-20, col_values=(('external_ids', {'iface-id': '3ef68f41-ea34-4162-bd93-4700131d939b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:14 localhost 
nova_compute[295778]: 2025-10-14 10:11:14.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:14 localhost ovn_controller[156286]: 2025-10-14T10:11:14Z|00061|binding|INFO|Releasing lport 3ef68f41-ea34-4162-bd93-4700131d939b from this chassis (sb_readonly=0) Oct 14 06:11:14 localhost nova_compute[295778]: 2025-10-14 10:11:14.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.012 161932 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/326e2535-2661-4046-aab8-cd9fa2cc08f1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/326e2535-2661-4046-aab8-cd9fa2cc08f1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.014 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[a7da1791-03b8-4102-afe4-5676ad7f3211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost nova_compute[295778]: 2025-10-14 10:11:14.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.018 161932 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: global Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: log /dev/log local0 debug Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: log-tag haproxy-metadata-proxy-326e2535-2661-4046-aab8-cd9fa2cc08f1 Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: user root Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: group 
root Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: maxconn 1024 Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: pidfile /var/lib/neutron/external/pids/326e2535-2661-4046-aab8-cd9fa2cc08f1.pid.haproxy Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: daemon Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: defaults Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: log global Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: mode http Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: option httplog Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: option dontlognull Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: option http-server-close Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: option forwardfor Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: retries 3 Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: timeout http-request 30s Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: timeout connect 30s Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: timeout client 32s Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: timeout server 32s Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: timeout http-keep-alive 30s Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: listen listener Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: bind 169.254.169.254:80 Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: server metadata /var/lib/neutron/metadata_proxy Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: http-request add-header X-OVN-Network-ID 326e2535-2661-4046-aab8-cd9fa2cc08f1 Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.021 161932 DEBUG 
neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'env', 'PROCESS_TAG=haproxy-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/326e2535-2661-4046-aab8-cd9fa2cc08f1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 14 06:11:14 localhost systemd[1]: tmp-crun.FbQcto.mount: Deactivated successfully. Oct 14 06:11:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:14 localhost podman[322929]: Oct 14 06:11:14 localhost podman[322929]: 2025-10-14 10:11:14.56378729 +0000 UTC m=+0.096113126 container create 6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:11:14 localhost systemd[1]: Started libpod-conmon-6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052.scope. Oct 14 06:11:14 localhost systemd[1]: tmp-crun.89soVt.mount: Deactivated successfully. Oct 14 06:11:14 localhost podman[322929]: 2025-10-14 10:11:14.527924981 +0000 UTC m=+0.060250797 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 14 06:11:14 localhost systemd[1]: Started libcrun container. 
Oct 14 06:11:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462b6111b760e10bbcc173eed415d80b280aaf0a96afc1c182b682e20a141c9d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:11:14 localhost podman[322929]: 2025-10-14 10:11:14.647060055 +0000 UTC m=+0.179385861 container init 6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 06:11:14 localhost podman[322929]: 2025-10-14 10:11:14.658742324 +0000 UTC m=+0.191068120 container start 6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:11:14 localhost neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1[322941]: [NOTICE] (322945) : New worker (322947) forked Oct 14 06:11:14 localhost neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1[322941]: [NOTICE] (322945) : Loading success. 
Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.716 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 in datapath 249801e2-2633-40b6-9890-ff6feb071ac2 unbound from our chassis#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.719 161932 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 249801e2-2633-40b6-9890-ff6feb071ac2#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.727 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccfb4f6-d0f3-4caf-b329-c0a23b10090c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.728 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap249801e2-21 in ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.730 320313 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap249801e2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.730 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e589c393-cb7c-4176-8fee-4caaf4744716]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.731 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[02c98971-8f7d-4d6b-8ad5-4e1885196ffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.752 162035 DEBUG oslo.privsep.daemon [-] privsep: 
reply[92de1944-de47-491c-ba92-dc78d1668386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.763 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[80eb803a-2dae-4bba-af36-63b1363ebd8a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.793 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[0be41860-d21b-4adc-8c18-6479ca4b2f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost NetworkManager[5972]: [1760436674.8028] manager: (tap249801e2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/22) Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.804 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddff0ee-60a4-4e6b-b4ca-040e5d0186c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.844 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5bb8cc-7fbb-4022-b9e2-e8ffe76eecc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.848 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb4b572-1904-442c-a93b-4bd4972b3dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap249801e2-21: link becomes ready Oct 14 06:11:14 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap249801e2-20: link becomes ready Oct 14 06:11:14 localhost NetworkManager[5972]: [1760436674.8772] device (tap249801e2-20): carrier: link connected Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 
2025-10-14 10:11:14.883 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[e4536184-8e1a-44d5-b011-22f5607bdaac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.907 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[46c1437d-f481-4cbf-bbad-ff4c90f9c12b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap249801e2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:52:2d:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 
'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1276197, 'reachable_time': 17971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 
'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322967, 'error': None, 'target': 'ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.925 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab47c9c-bd6c-4de9-86d3-54f6fe1adf00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:2da8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1276197, 'tstamp': 1276197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322968, 'error': None, 'target': 'ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.945 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[1cdface6-73ea-4acc-a332-444e5df4c23c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 
'attrs': [['IFLA_IFNAME', 'tap249801e2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:52:2d:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 
'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1276197, 'reachable_time': 17971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 
'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322969, 'error': None, 'target': 'ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:14.981 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[38dcc304-0cf8-4547-9521-561bea6547d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:15.044 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[1d62dd3b-2f63-40e6-80ac-a654161672b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:15.048 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249801e2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:15.048 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:15.049 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap249801e2-20, may_exist=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:15 localhost ovn_controller[156286]: 2025-10-14T10:11:15Z|00062|memory|INFO|peak resident set size grew 51% in last 2396.9 seconds, from 15004 kB to 22688 kB Oct 14 06:11:15 localhost ovn_controller[156286]: 2025-10-14T10:11:15Z|00063|memory|INFO|idl-cells-OVN_Southbound:9765 idl-cells-Open_vSwitch:1326 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:2 lflow-cache-entries-cache-expr:182 lflow-cache-entries-cache-matches:224 lflow-cache-size-KB:699 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:448 ofctrl_installed_flow_usage-KB:326 ofctrl_sb_flow_ref_usage-KB:168 Oct 14 06:11:15 localhost kernel: device tap249801e2-20 entered promiscuous mode Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:15.100 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap249801e2-20, col_values=(('external_ids', {'iface-id': '8506c604-6459-4957-b50a-6fb71d548b83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:15 localhost ovn_controller[156286]: 2025-10-14T10:11:15Z|00064|binding|INFO|Releasing lport 8506c604-6459-4957-b50a-6fb71d548b83 from this chassis (sb_readonly=0) Oct 14 06:11:15 localhost nova_compute[295778]: 2025-10-14 10:11:15.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 285 MiB data, 920 MiB used, 41 GiB / 42 GiB avail; 7.5 MiB/s rd, 5.4 MiB/s wr, 309 op/s Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:15.120 161932 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/249801e2-2633-40b6-9890-ff6feb071ac2.pid.haproxy; Error: 
[Errno 2] No such file or directory: '/var/lib/neutron/external/pids/249801e2-2633-40b6-9890-ff6feb071ac2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 14 06:11:15 localhost nova_compute[295778]: 2025-10-14 10:11:15.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:15.121 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[da511bdd-9c91-4d0e-89a0-d3b734e25cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:15.122 161932 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: global Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: log /dev/log local0 debug Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: log-tag haproxy-metadata-proxy-249801e2-2633-40b6-9890-ff6feb071ac2 Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: user root Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: group root Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: maxconn 1024 Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: pidfile /var/lib/neutron/external/pids/249801e2-2633-40b6-9890-ff6feb071ac2.pid.haproxy Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: daemon Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: defaults Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: log global Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: mode http Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: option httplog Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: option dontlognull Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: option http-server-close Oct 14 06:11:15 localhost 
ovn_metadata_agent[161927]: option forwardfor
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: retries 3
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: timeout http-request 30s
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: timeout connect 30s
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: timeout client 32s
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: timeout server 32s
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: timeout http-keep-alive 30s
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]:
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]:
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: listen listener
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: bind 169.254.169.254:80
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: server metadata /var/lib/neutron/metadata_proxy
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: http-request add-header X-OVN-Network-ID 249801e2-2633-40b6-9890-ff6feb071ac2
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 06:11:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:15.123 161932 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2', 'env', 'PROCESS_TAG=haproxy-249801e2-2633-40b6-9890-ff6feb071ac2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/249801e2-2633-40b6-9890-ff6feb071ac2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 06:11:15 localhost nova_compute[295778]: 2025-10-14 10:11:15.170 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Check if temp file /var/lib/nova/instances/tmpxkjgntcw exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct 14 06:11:15 localhost nova_compute[295778]: 2025-10-14 10:11:15.171 2 DEBUG nova.compute.manager [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] source check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpxkjgntcw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='51c986ce-19c4-46c3-80e9-9367d31f15ba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct 14 06:11:15 localhost nova_compute[295778]: 2025-10-14 10:11:15.173 2 DEBUG nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Check if temp file /var/lib/nova/instances/tmpfafahloz exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct 14 06:11:15 localhost nova_compute[295778]: 2025-10-14 10:11:15.173 2 DEBUG nova.compute.manager [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] source check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpfafahloz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='daabd3b0-5555-49e7-a72f-51f6e096611a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct 14 06:11:15 localhost podman[323002]:
Oct 14 06:11:15 localhost podman[323002]: 2025-10-14 10:11:15.519248517 +0000 UTC m=+0.078763237 container create 25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 06:11:15 localhost systemd[1]: Started libpod-conmon-25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506.scope.
Oct 14 06:11:15 localhost podman[323002]: 2025-10-14 10:11:15.482563545 +0000 UTC m=+0.042078345 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 14 06:11:15 localhost systemd[1]: Started libcrun container.
Oct 14 06:11:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49aad93b58e7f38a40617c5d1ed3843711a3d9fc83796c1261ea5503f307c2dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:11:15 localhost podman[323002]: 2025-10-14 10:11:15.616610834 +0000 UTC m=+0.176125594 container init 25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS)
Oct 14 06:11:15 localhost podman[323002]: 2025-10-14 10:11:15.625592861 +0000 UTC m=+0.185107611 container start 25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 06:11:15 localhost neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2[323016]: [NOTICE] (323020) : New worker (323022) forked
Oct 14 06:11:15 localhost neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2[323016]: [NOTICE] (323020) : Loading success.
Oct 14 06:11:16 localhost systemd[1]: tmp-crun.KtLUlQ.mount: Deactivated successfully.
Oct 14 06:11:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 285 MiB data, 920 MiB used, 41 GiB / 42 GiB avail; 5.8 MiB/s rd, 38 KiB/s wr, 240 op/s
Oct 14 06:11:17 localhost nova_compute[295778]: 2025-10-14 10:11:17.427 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 06:11:17 localhost nova_compute[295778]: 2025-10-14 10:11:17.428 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 06:11:17 localhost nova_compute[295778]: 2025-10-14 10:11:17.444 2 INFO nova.compute.rpcapi [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Oct 14 06:11:17 localhost nova_compute[295778]: 2025-10-14 10:11:17.445 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 06:11:17 localhost nova_compute[295778]: 2025-10-14
10:11:17.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:11:17 localhost nova_compute[295778]: 2025-10-14 10:11:17.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:11:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:18.261 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:11:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:18.262 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 06:11:18 localhost nova_compute[295778]: 2025-10-14 10:11:18.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:11:18 localhost nova_compute[295778]: 2025-10-14 10:11:18.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:11:18 localhost nova_compute[295778]: 2025-10-14 10:11:18.927 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:11:18 localhost nova_compute[295778]: 2025-10-14 10:11:18.928 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:11:18 localhost nova_compute[295778]: 2025-10-14 10:11:18.928 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:11:18 localhost nova_compute[295778]: 2025-10-14 10:11:18.929 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 06:11:18 localhost nova_compute[295778]: 2025-10-14 10:11:18.929 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 06:11:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 285 MiB data, 920 MiB used, 41 GiB / 42 GiB avail; 5.8 MiB/s rd, 38 KiB/s wr, 240 op/s
Oct 14 06:11:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 14 06:11:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/617065909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.424 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 06:11:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:11:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:11:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 14 06:11:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:11:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 06:11:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 14 06:11:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 06:11:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:11:19 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev d1308320-fcbd-4f9a-a8cf-d4d5c1487b43 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:11:19 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev d1308320-fcbd-4f9a-a8cf-d4d5c1487b43 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:11:19 localhost ceph-mgr[300442]: [progress INFO root] Completed event d1308320-fcbd-4f9a-a8cf-d4d5c1487b43 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 14 06:11:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 14 06:11:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 14 06:11:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.536 2 DEBUG nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.537 2 DEBUG nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.540 2 DEBUG nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.540 2 DEBUG nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.545 2 DEBUG nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.545 2 DEBUG nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 06:11:19 localhost systemd[1]: tmp-crun.HBaH8G.mount: Deactivated successfully.
Oct 14 06:11:19 localhost podman[323120]: 2025-10-14 10:11:19.561761684 +0000 UTC m=+0.098217673 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 06:11:19 localhost podman[323120]: 2025-10-14 10:11:19.570022631 +0000 UTC m=+0.106478620 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d)
Oct 14 06:11:19 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:11:19 localhost podman[323121]: 2025-10-14 10:11:19.605865241 +0000 UTC m=+0.142211256 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:11:19 localhost podman[323121]: 2025-10-14 10:11:19.641242108 +0000 UTC m=+0.177588123 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 06:11:19 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.726 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.727 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11085MB free_disk=41.64888381958008GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.728 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.728 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.805 2 INFO nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Updating resource usage from migration 643dadf0-0c56-4494-8d83-ef68f5c1daa6
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.806 2 INFO nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Updating resource usage from migration 73c8560b-0e97-4c62-8543-1cd0ed3ebde3
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.860 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Instance cc1adead-5ea6-42fa-9c12-f4d35462f1a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.861 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Migration 643dadf0-0c56-4494-8d83-ef68f5c1daa6 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.861 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Migration 73c8560b-0e97-4c62-8543-1cd0ed3ebde3 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.861 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.862 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=896MB phys_disk=41GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 06:11:19 localhost nova_compute[295778]: 2025-10-14 10:11:19.958 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.205 2 DEBUG nova.virt.libvirt.driver [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 06:11:20 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:11:20 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:11:20 localhost
ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 14 06:11:20 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1139465951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.438 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.443 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.491 2 ERROR nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] [req-54158f64-e6a1-421d-bdbd-90c145b4ea55] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}}] for resource provider with UUID ebb6de71-88e5-4477-92fc-f2b9532f7fcd. Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict ", "code": "placement.concurrent_update", "request_id": "req-54158f64-e6a1-421d-bdbd-90c145b4ea55"}]}
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.510 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.533 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.534 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.559 2 DEBUG nova.compute.manager [req-3798cfd0-8b12-40f6-9541-35027d8961d6 req-56432a20-9f25-4a19-9980-8d429ced4296 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received event network-vif-unplugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.560 2 DEBUG oslo_concurrency.lockutils [req-3798cfd0-8b12-40f6-9541-35027d8961d6 req-56432a20-9f25-4a19-9980-8d429ced4296 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.561 2 DEBUG oslo_concurrency.lockutils [req-3798cfd0-8b12-40f6-9541-35027d8961d6 req-56432a20-9f25-4a19-9980-8d429ced4296 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.561 2 DEBUG oslo_concurrency.lockutils [req-3798cfd0-8b12-40f6-9541-35027d8961d6 req-56432a20-9f25-4a19-9980-8d429ced4296 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" "released" by
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.562 2 DEBUG nova.compute.manager [req-3798cfd0-8b12-40f6-9541-35027d8961d6 req-56432a20-9f25-4a19-9980-8d429ced4296 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] No waiting events found dispatching network-vif-unplugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.562 2 DEBUG nova.compute.manager [req-3798cfd0-8b12-40f6-9541-35027d8961d6 req-56432a20-9f25-4a19-9980-8d429ced4296 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received event network-vif-unplugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.576 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.613 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_A
CCELERATORS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 14 06:11:20 localhost nova_compute[295778]: 2025-10-14 10:11:20.703 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 285 MiB data, 920 MiB used, 41 GiB / 42 GiB avail; 5.8 MiB/s rd, 38 KiB/s wr, 241 op/s Oct 14 06:11:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:21 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3771388413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.220 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.228 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.410 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updated inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.411 2 DEBUG 
nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.412 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.470 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.471 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.472 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.865 2 INFO nova.compute.manager [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Took 4.44 seconds for pre_live_migration on destination host np0005486732.localdomain.#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.866 2 DEBUG nova.compute.manager [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.881 2 DEBUG nova.compute.manager [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpxkjgntcw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='51c986ce-19c4-46c3-80e9-9367d31f15ba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(643dadf0-0c56-4494-8d83-ef68f5c1daa6),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m Oct 14 
06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.885 2 DEBUG nova.objects.instance [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Lazy-loading 'migration_context' on Instance uuid 51c986ce-19c4-46c3-80e9-9367d31f15ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.886 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.887 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.887 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.901 2 DEBUG nova.virt.libvirt.vif [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T10:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2110921355',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2110921355',id=6,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-14T10:11:10Z,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d6e7f435b24646ecaa54e485b818329f',ramdisk_id='',reservation_id='r-ndyvjswp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1148905026',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1148905026-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-10-14T10:11:10Z,user_data=None,user_id=
'4a2c72478a7c4747a73158cd8119b6ba',uuid=51c986ce-19c4-46c3-80e9-9367d31f15ba,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.902 2 DEBUG nova.network.os_vif_util [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Converting VIF {"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.902 2 DEBUG nova.network.os_vif_util [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:66:a8,bridge_name='br-int',has_traffic_filtering=True,id=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77,network=Network(249801e2-2633-40b6-9890-ff6feb071ac2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5ccffc8d-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.903 2 DEBUG nova.virt.libvirt.migration [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Updating guest XML with vif config: Oct 14 06:11:21 localhost nova_compute[295778]: Oct 14 06:11:21 localhost nova_compute[295778]: Oct 14 06:11:21 localhost nova_compute[295778]: Oct 14 06:11:21 localhost nova_compute[295778]: Oct 14 06:11:21 localhost nova_compute[295778]: Oct 14 06:11:21 localhost nova_compute[295778]: Oct 
14 06:11:21 localhost nova_compute[295778]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.904 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.928 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.928 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:21 localhost nova_compute[295778]: 2025-10-14 10:11:21.928 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.391 2 DEBUG nova.virt.libvirt.migration [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.392 2 INFO nova.virt.libvirt.migration [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.547 2 INFO nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.603 2 DEBUG nova.compute.manager [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received event network-vif-plugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.604 2 DEBUG oslo_concurrency.lockutils [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.605 2 DEBUG oslo_concurrency.lockutils [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.606 2 DEBUG oslo_concurrency.lockutils [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.607 2 DEBUG nova.compute.manager [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] No waiting events found dispatching network-vif-plugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.607 2 WARNING nova.compute.manager [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received unexpected event network-vif-plugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 for 
instance with vm_state active and task_state migrating.#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.608 2 DEBUG nova.compute.manager [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received event network-changed-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.608 2 DEBUG nova.compute.manager [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Refreshing instance network info cache due to event network-changed-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.609 2 DEBUG oslo_concurrency.lockutils [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "refresh_cache-51c986ce-19c4-46c3-80e9-9367d31f15ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.609 2 DEBUG oslo_concurrency.lockutils [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquired lock "refresh_cache-51c986ce-19c4-46c3-80e9-9367d31f15ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.610 2 DEBUG 
nova.network.neutron [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Refreshing network info cache for port 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Oct 14 06:11:22 localhost nova_compute[295778]: 2025-10-14 10:11:22.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.050 2 DEBUG nova.virt.libvirt.migration [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.051 2 DEBUG nova.virt.libvirt.migration [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Oct 14 06:11:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 285 MiB data, 920 MiB used, 41 GiB / 42 GiB avail; 4.5 MiB/s rd, 151 op/s Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.121 2 DEBUG nova.network.neutron [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a 
f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Updated VIF entry in instance network info cache for port 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.122 2 DEBUG nova.network.neutron [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Updating instance_info_cache with network_info: [{"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005486732.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.150 2 DEBUG 
oslo_concurrency.lockutils [req-8cee0008-863e-40e1-b83b-02d771622307 req-460ca57e-0091-4832-937b-3c662c881c55 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Releasing lock "refresh_cache-51c986ce-19c4-46c3-80e9-9367d31f15ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.556 2 DEBUG nova.virt.libvirt.migration [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.558 2 DEBUG nova.virt.libvirt.migration [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.565 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.566 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] VM Paused (Lifecycle Event)#033[00m Oct 14 06:11:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:11:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:11:23 localhost snmpd[68028]: empty 
variable list in _query Oct 14 06:11:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:11:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:11:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.600 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.603 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.629 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] During sync_power_state the instance has a pending task (migrating). 
Skip.#033[00m Oct 14 06:11:23 localhost kernel: device tap5ccffc8d-03 left promiscuous mode Oct 14 06:11:23 localhost NetworkManager[5972]: [1760436683.7558] device (tap5ccffc8d-03): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:23 localhost ovn_controller[156286]: 2025-10-14T10:11:23Z|00065|binding|INFO|Releasing lport 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 from this chassis (sb_readonly=0) Oct 14 06:11:23 localhost ovn_controller[156286]: 2025-10-14T10:11:23Z|00066|binding|INFO|Setting lport 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 down in Southbound Oct 14 06:11:23 localhost ovn_controller[156286]: 2025-10-14T10:11:23Z|00067|binding|INFO|Releasing lport 2ce3b76c-371e-4f12-9045-22b8830b61bc from this chassis (sb_readonly=0) Oct 14 06:11:23 localhost ovn_controller[156286]: 2025-10-14T10:11:23Z|00068|binding|INFO|Setting lport 2ce3b76c-371e-4f12-9045-22b8830b61bc down in Southbound Oct 14 06:11:23 localhost ovn_controller[156286]: 2025-10-14T10:11:23Z|00069|binding|INFO|Removing iface tap5ccffc8d-03 ovn-installed in OVS Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:23 localhost ovn_controller[156286]: 2025-10-14T10:11:23Z|00070|binding|INFO|Releasing lport eaac0aff-a3e3-4086-98c7-adc34e5a13a7 from this chassis (sb_readonly=0) Oct 14 06:11:23 localhost ovn_controller[156286]: 2025-10-14T10:11:23Z|00071|binding|INFO|Releasing lport 6f2773ed-54b3-461c-b14d-86e7f9734f2b from this chassis (sb_readonly=0) Oct 14 06:11:23 localhost ovn_controller[156286]: 2025-10-14T10:11:23Z|00072|binding|INFO|Releasing lport 8506c604-6459-4957-b50a-6fb71d548b83 
from this chassis (sb_readonly=0) Oct 14 06:11:23 localhost ovn_controller[156286]: 2025-10-14T10:11:23Z|00073|binding|INFO|Releasing lport 3ef68f41-ea34-4162-bd93-4700131d939b from this chassis (sb_readonly=0) Oct 14 06:11:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:23.791 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:5c:16 19.80.0.152'], port_security=['fa:16:3e:f1:5c:16 19.80.0.152'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-459853245', 'neutron:cidrs': '19.80.0.152/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-459853245', 'neutron:project_id': 'd6e7f435b24646ecaa54e485b818329f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '08e02d40-7eb0-493a-bf38-79869188d51f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=bde85ee0-511c-4612-bae5-13cb9e42823c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=2ce3b76c-371e-4f12-9045-22b8830b61bc) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:23.794 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), 
priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:66:a8 10.100.0.9'], port_security=['fa:16:3e:8f:66:a8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain,np0005486732.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'bfba0fbc-2817-4ef8-a192-47e9f930e160'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1667242671', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '51c986ce-19c4-46c3-80e9-9367d31f15ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249801e2-2633-40b6-9890-ff6feb071ac2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1667242671', 'neutron:project_id': 'd6e7f435b24646ecaa54e485b818329f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '08e02d40-7eb0-493a-bf38-79869188d51f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca53c92a-b842-485b-a19e-4e345391dda0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:23.795 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 2ce3b76c-371e-4f12-9045-22b8830b61bc in datapath 326e2535-2661-4046-aab8-cd9fa2cc08f1 unbound from our chassis#033[00m Oct 14 06:11:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:23.799 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port af3a05e7-dee4-4ed7-a280-37038ee76db0 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:11:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:23.799 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 326e2535-2661-4046-aab8-cd9fa2cc08f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:11:23 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully. Oct 14 06:11:23 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 13.907s CPU time. Oct 14 06:11:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:23.801 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3c3ac3-2517-42cf-a31f-3a880d0b495c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:23.802 161932 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1 namespace which is not needed anymore#033[00m Oct 14 06:11:23 localhost systemd-machined[205044]: Machine qemu-3-instance-00000006 terminated. 
Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:23 localhost journal[235816]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk: No such file or directory Oct 14 06:11:23 localhost journal[235816]: Unable to get XATTR trusted.libvirt.security.ref_dac on 51c986ce-19c4-46c3-80e9-9367d31f15ba_disk: No such file or directory Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.962 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.964 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m Oct 14 06:11:23 localhost nova_compute[295778]: 2025-10-14 10:11:23.965 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Migration operation thread notification thread_finished 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1[322941]: [NOTICE] (322945) : haproxy version is 2.8.14-c23fe91 Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1[322941]: [NOTICE] (322945) : path to executable is /usr/sbin/haproxy Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1[322941]: [WARNING] (322945) : Exiting Master process... Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1[322941]: [ALERT] (322945) : Current worker (322947) exited with code 143 (Terminated) Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1[322941]: [WARNING] (322945) : All workers exited. Exiting... (0) Oct 14 06:11:24 localhost systemd[1]: libpod-6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052.scope: Deactivated successfully. Oct 14 06:11:24 localhost podman[323249]: 2025-10-14 10:11:24.048611894 +0000 UTC m=+0.108970176 container died 6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.062 2 DEBUG nova.virt.libvirt.guest [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 
'51c986ce-19c4-46c3-80e9-9367d31f15ba' (instance-00000006) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.065 2 INFO nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Migration operation has completed#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.067 2 INFO nova.compute.manager [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] _post_live_migration() is started..#033[00m Oct 14 06:11:24 localhost podman[323249]: 2025-10-14 10:11:24.105083439 +0000 UTC m=+0.165441721 container cleanup 6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:11:24 localhost podman[323268]: 2025-10-14 10:11:24.147017509 +0000 UTC m=+0.070454766 container cleanup 6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:11:24 localhost systemd[1]: libpod-conmon-6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052.scope: Deactivated successfully. Oct 14 06:11:24 localhost podman[323282]: 2025-10-14 10:11:24.203260778 +0000 UTC m=+0.073296711 container remove 6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:11:24 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:11:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.209 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[79d7ea7c-ca2e-45e2-a803-9e6cbbaa64e8]: (4, ('Tue Oct 14 10:11:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1 (6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052)\n6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052\nTue Oct 14 10:11:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1 (6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052)\n6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 
06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.212 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[6d784a36-0438-4e7d-8bc3-9efbbccac43d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.214 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap326e2535-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:24 localhost kernel: device tap326e2535-20 left promiscuous mode Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.232 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[742ad96d-1322-467e-8032-8f6541210e49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.251 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[22a0376d-aadf-46c0-94e9-167397de8044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost 
ovn_metadata_agent[161927]: 2025-10-14 10:11:24.253 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad86e61-fe74-4745-9c8f-3e54ca381a07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.265 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[1094a8df-949e-4206-b728-e59931b115fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 
'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1276082, 'reachable_time': 25147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323305, 'error': None, 'target': 'ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.275 162035 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-326e2535-2661-4046-aab8-cd9fa2cc08f1 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.275 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[1e626148-9d09-4eab-8e8f-12211044b4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.277 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 in datapath 249801e2-2633-40b6-9890-ff6feb071ac2 unbound from our chassis#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.282 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 249801e2-2633-40b6-9890-ff6feb071ac2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.284 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[dabc00f3-bca1-4fdc-932c-f6d4562e890e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.285 161932 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2 namespace which is not needed anymore#033[00m Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2[323016]: [NOTICE] (323020) : haproxy version is 2.8.14-c23fe91 Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2[323016]: [NOTICE] (323020) : path to executable is /usr/sbin/haproxy Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2[323016]: [WARNING] (323020) : Exiting Master process... 
Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2[323016]: [WARNING] (323020) : Exiting Master process... Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2[323016]: [ALERT] (323020) : Current worker (323022) exited with code 143 (Terminated) Oct 14 06:11:24 localhost neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2[323016]: [WARNING] (323020) : All workers exited. Exiting... (0) Oct 14 06:11:24 localhost systemd[1]: libpod-25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506.scope: Deactivated successfully. Oct 14 06:11:24 localhost podman[323323]: 2025-10-14 10:11:24.470956296 +0000 UTC m=+0.074828123 container died 25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:11:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:24 localhost podman[323323]: 2025-10-14 10:11:24.507612376 +0000 UTC m=+0.111484163 container cleanup 25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:11:24 localhost podman[323336]: 2025-10-14 10:11:24.543361333 +0000 UTC m=+0.064520999 container cleanup 25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:11:24 localhost systemd[1]: libpod-conmon-25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506.scope: Deactivated successfully. Oct 14 06:11:24 localhost podman[323350]: 2025-10-14 10:11:24.616354455 +0000 UTC m=+0.089894481 container remove 25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.622 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e867aacf-01bc-449b-b001-73ae3137f245]: (4, ('Tue Oct 14 10:11:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2 
(25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506)\n25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506\nTue Oct 14 10:11:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2 (25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506)\n25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.625 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[805ac345-982a-410b-831b-3fbc4adb2a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.626 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249801e2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:24 localhost kernel: device tap249801e2-20 left promiscuous mode Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.649 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[68d6d068-d9a3-4422-8964-06f458910e15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.665 2 DEBUG nova.compute.manager [req-5a2d5d19-e243-4104-9cb3-19766de18cfb req-498f760c-bb1e-4fb3-bf82-95267ea96873 
da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received event network-vif-unplugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.666 2 DEBUG oslo_concurrency.lockutils [req-5a2d5d19-e243-4104-9cb3-19766de18cfb req-498f760c-bb1e-4fb3-bf82-95267ea96873 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.667 2 DEBUG oslo_concurrency.lockutils [req-5a2d5d19-e243-4104-9cb3-19766de18cfb req-498f760c-bb1e-4fb3-bf82-95267ea96873 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.668 2 DEBUG oslo_concurrency.lockutils [req-5a2d5d19-e243-4104-9cb3-19766de18cfb req-498f760c-bb1e-4fb3-bf82-95267ea96873 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:24 localhost 
nova_compute[295778]: 2025-10-14 10:11:24.668 2 DEBUG nova.compute.manager [req-5a2d5d19-e243-4104-9cb3-19766de18cfb req-498f760c-bb1e-4fb3-bf82-95267ea96873 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] No waiting events found dispatching network-vif-unplugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.669 2 DEBUG nova.compute.manager [req-5a2d5d19-e243-4104-9cb3-19766de18cfb req-498f760c-bb1e-4fb3-bf82-95267ea96873 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Received event network-vif-unplugged-5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.664 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a0f1f1-f92a-4aaa-9744-7f1130f3ab0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.675 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[a705f783-bd1b-47d8-976b-44f3380a31ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.693 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e2749701-8d55-4d98-ba70-ba2659fc727d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], 
['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 
'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1276188, 'reachable_time': 16410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 
255, 'pid': 323378, 'error': None, 'target': 'ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.695 162035 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-249801e2-2633-40b6-9890-ff6feb071ac2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 14 06:11:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:24.695 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[95eb99f7-7adb-4869-8030-6c1f09fc8915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:24 localhost podman[323375]: 2025-10-14 10:11:24.763155722 +0000 UTC m=+0.082327351 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': 
{'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 14 06:11:24 localhost podman[323375]: 2025-10-14 10:11:24.783201852 +0000 UTC m=+0.102373531 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Oct 14 06:11:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:11:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:11:24 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:11:24 localhost podman[323397]: 2025-10-14 10:11:24.889455135 +0000 UTC m=+0.088365250 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:11:24 localhost podman[323397]: 2025-10-14 10:11:24.901443873 +0000 UTC m=+0.100353978 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:11:24 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.920 2 DEBUG nova.network.neutron [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Activated binding for port 5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77 and host np0005486732.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.921 2 DEBUG nova.compute.manager [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m Oct 14 06:11:24 
localhost nova_compute[295778]: 2025-10-14 10:11:24.922 2 DEBUG nova.virt.libvirt.vif [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T10:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2110921355',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2110921355',id=6,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-14T10:11:10Z,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d6e7f435b24646ecaa54e485b818329f',ramdisk_id='',reservation_id='r-ndyvjswp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBloc
kMigrationV225Test-1148905026',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1148905026-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-10-14T10:11:13Z,user_data=None,user_id='4a2c72478a7c4747a73158cd8119b6ba',uuid=51c986ce-19c4-46c3-80e9-9367d31f15ba,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.922 2 DEBUG nova.network.os_vif_util [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Converting VIF {"id": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "address": "fa:16:3e:8f:66:a8", "network": {"id": "249801e2-2633-40b6-9890-ff6feb071ac2", "bridge": "br-int", "label": 
"tempest-LiveAutoBlockMigrationV225Test-1532647513-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d6e7f435b24646ecaa54e485b818329f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ccffc8d-03", "ovs_interfaceid": "5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.923 2 DEBUG nova.network.os_vif_util [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:66:a8,bridge_name='br-int',has_traffic_filtering=True,id=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77,network=Network(249801e2-2633-40b6-9890-ff6feb071ac2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5ccffc8d-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.924 2 DEBUG os_vif [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Unplugging vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:8f:66:a8,bridge_name='br-int',has_traffic_filtering=True,id=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77,network=Network(249801e2-2633-40b6-9890-ff6feb071ac2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5ccffc8d-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.929 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.931 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ccffc8d-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.932 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.935 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.940 2 INFO os_vif [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:66:a8,bridge_name='br-int',has_traffic_filtering=True,id=5ccffc8d-03e9-40cc-a050-2d5b8d8a4f77,network=Network(249801e2-2633-40b6-9890-ff6feb071ac2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5ccffc8d-03')#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.941 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.942 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:24 localhost 
nova_compute[295778]: 2025-10-14 10:11:24.943 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.943 2 DEBUG nova.compute.manager [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.944 2 INFO nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Deleting instance files /var/lib/nova/instances/51c986ce-19c4-46c3-80e9-9367d31f15ba_del#033[00m Oct 14 06:11:24 localhost nova_compute[295778]: 2025-10-14 10:11:24.945 2 INFO nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Deletion of /var/lib/nova/instances/51c986ce-19c4-46c3-80e9-9367d31f15ba_del complete#033[00m Oct 14 06:11:25 localhost podman[323398]: 2025-10-14 10:11:25.000764832 +0000 UTC m=+0.189779245 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, 
org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 06:11:25 localhost systemd[1]: var-lib-containers-storage-overlay-49aad93b58e7f38a40617c5d1ed3843711a3d9fc83796c1261ea5503f307c2dd-merged.mount: Deactivated successfully. Oct 14 06:11:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25a87237dbf3459b4ae50379eda81a535dae1e2134f2a0fb10945f90f135d506-userdata-shm.mount: Deactivated successfully. Oct 14 06:11:25 localhost systemd[1]: run-netns-ovnmeta\x2d249801e2\x2d2633\x2d40b6\x2d9890\x2dff6feb071ac2.mount: Deactivated successfully. Oct 14 06:11:25 localhost systemd[1]: var-lib-containers-storage-overlay-462b6111b760e10bbcc173eed415d80b280aaf0a96afc1c182b682e20a141c9d-merged.mount: Deactivated successfully. 
Oct 14 06:11:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a8f770dc9730757c545734e53ebc85a470439064e54bc0d414f29d83d57a052-userdata-shm.mount: Deactivated successfully. Oct 14 06:11:25 localhost systemd[1]: run-netns-ovnmeta\x2d326e2535\x2d2661\x2d4046\x2daab8\x2dcd9fa2cc08f1.mount: Deactivated successfully. Oct 14 06:11:25 localhost podman[323398]: 2025-10-14 10:11:25.045952559 +0000 UTC m=+0.234967012 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2) Oct 14 06:11:25 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:11:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v103: 177 pgs: 177 active+clean; 360 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 6.2 MiB/s wr, 312 op/s Oct 14 06:11:25 localhost nova_compute[295778]: 2025-10-14 10:11:25.907 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:25 localhost nova_compute[295778]: 2025-10-14 10:11:25.907 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:26 localhost ovn_controller[156286]: 2025-10-14T10:11:26Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:4f:8c 10.100.0.8 Oct 14 06:11:26 localhost ovn_controller[156286]: 2025-10-14T10:11:26Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:4f:8c 10.100.0.8 Oct 14 06:11:26 localhost nova_compute[295778]: 2025-10-14 10:11:26.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:26 localhost nova_compute[295778]: 2025-10-14 10:11:26.903 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:11:26 localhost nova_compute[295778]: 2025-10-14 10:11:26.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:11:26 localhost nova_compute[295778]: 2025-10-14 10:11:26.924 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:11:26 localhost nova_compute[295778]: 2025-10-14 10:11:26.924 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquired lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:11:26 localhost nova_compute[295778]: 2025-10-14 10:11:26.925 2 DEBUG nova.network.neutron [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 14 06:11:26 localhost nova_compute[295778]: 2025-10-14 10:11:26.925 2 DEBUG nova.objects.instance [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:27 localhost nova_compute[295778]: 2025-10-14 10:11:27.003 2 DEBUG nova.network.neutron [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 14 06:11:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v104: 177 pgs: 177 active+clean; 360 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 900 KiB/s rd, 6.2 MiB/s wr, 161 op/s Oct 14 06:11:27 localhost nova_compute[295778]: 2025-10-14 10:11:27.247 2 DEBUG nova.network.neutron [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:11:27 localhost nova_compute[295778]: 2025-10-14 10:11:27.262 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Releasing lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:11:27 localhost nova_compute[295778]: 2025-10-14 10:11:27.263 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 14 06:11:27 localhost nova_compute[295778]: 2025-10-14 10:11:27.263 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:27 localhost nova_compute[295778]: 2025-10-14 10:11:27.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:28 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:28.264 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn 
n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:28 localhost nova_compute[295778]: 2025-10-14 10:11:28.839 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Acquiring lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:28 localhost nova_compute[295778]: 2025-10-14 10:11:28.841 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:28 localhost nova_compute[295778]: 2025-10-14 10:11:28.842 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Lock "51c986ce-19c4-46c3-80e9-9367d31f15ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:28 localhost nova_compute[295778]: 2025-10-14 10:11:28.871 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Acquiring lock "compute_resources" 
by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:28 localhost nova_compute[295778]: 2025-10-14 10:11:28.872 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:28 localhost nova_compute[295778]: 2025-10-14 10:11:28.872 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:28 localhost nova_compute[295778]: 2025-10-14 10:11:28.873 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:11:28 localhost nova_compute[295778]: 2025-10-14 10:11:28.873 2 DEBUG oslo_concurrency.processutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:29 localhost ceph-mgr[300442]: log_channel(cluster) log 
[DBG] : pgmap v105: 177 pgs: 177 active+clean; 360 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 900 KiB/s rd, 6.2 MiB/s wr, 161 op/s Oct 14 06:11:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:29 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1668632311' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.351 2 DEBUG oslo_concurrency.processutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.430 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.431 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.437 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] skipping disk for instance-00000007 as it 
does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.438 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 14 06:11:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.699 2 WARNING nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.701 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11125MB free_disk=41.43317413330078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, 
{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.702 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.703 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.750 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Migration for instance 51c986ce-19c4-46c3-80e9-9367d31f15ba refers to another host's instance! 
_pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.768 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.768 2 INFO nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Updating resource usage from migration 73c8560b-0e97-4c62-8543-1cd0ed3ebde3#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.821 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Instance cc1adead-5ea6-42fa-9c12-f4d35462f1a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.821 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Migration 643dadf0-0c56-4494-8d83-ef68f5c1daa6 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.822 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Migration 73c8560b-0e97-4c62-8543-1cd0ed3ebde3 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.823 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.824 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=768MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.928 2 DEBUG oslo_concurrency.processutils [None 
req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:29 localhost nova_compute[295778]: 2025-10-14 10:11:29.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:30 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1198680378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:30 localhost nova_compute[295778]: 2025-10-14 10:11:30.397 2 DEBUG oslo_concurrency.processutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:30 localhost nova_compute[295778]: 2025-10-14 10:11:30.402 2 DEBUG nova.compute.provider_tree [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:11:30 localhost nova_compute[295778]: 2025-10-14 10:11:30.428 2 DEBUG nova.scheduler.client.report [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Inventory has not changed for provider 
ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:11:30 localhost nova_compute[295778]: 2025-10-14 10:11:30.455 2 DEBUG nova.compute.resource_tracker [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:11:30 localhost nova_compute[295778]: 2025-10-14 10:11:30.455 2 DEBUG oslo_concurrency.lockutils [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:30 localhost nova_compute[295778]: 2025-10-14 10:11:30.467 2 INFO nova.compute.manager [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Migrating instance to np0005486732.localdomain finished successfully.#033[00m Oct 14 06:11:30 localhost nova_compute[295778]: 2025-10-14 10:11:30.582 2 INFO nova.scheduler.client.report [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] 
Deleted allocation for migration 643dadf0-0c56-4494-8d83-ef68f5c1daa6#033[00m Oct 14 06:11:30 localhost nova_compute[295778]: 2025-10-14 10:11:30.582 2 DEBUG nova.virt.libvirt.driver [None req-717e0d97-614e-427d-8e45-92de140f7e5c a671f79d1c2e4cd28c8cf592f2401aed 847541e081c842b2b9b1e6a9c5cd6cf4 - - default default] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m Oct 14 06:11:30 localhost podman[246584]: time="2025-10-14T10:11:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:11:30 localhost podman[246584]: @ - - [14/Oct/2025:10:11:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152331 "" "Go-http-client/1.1" Oct 14 06:11:30 localhost podman[246584]: @ - - [14/Oct/2025:10:11:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 21239 "" "Go-http-client/1.1" Oct 14 06:11:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 988 KiB/s rd, 6.4 MiB/s wr, 194 op/s Oct 14 06:11:31 localhost nova_compute[295778]: 2025-10-14 10:11:31.264 2 DEBUG nova.virt.libvirt.driver [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Oct 14 06:11:32 localhost nova_compute[295778]: 2025-10-14 10:11:32.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap 
v107: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 984 KiB/s rd, 6.4 MiB/s wr, 193 op/s Oct 14 06:11:33 localhost openstack_network_exporter[248748]: ERROR 10:11:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:11:33 localhost openstack_network_exporter[248748]: ERROR 10:11:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:11:33 localhost openstack_network_exporter[248748]: ERROR 10:11:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:11:33 localhost openstack_network_exporter[248748]: ERROR 10:11:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:11:33 localhost openstack_network_exporter[248748]: Oct 14 06:11:33 localhost openstack_network_exporter[248748]: ERROR 10:11:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:11:33 localhost openstack_network_exporter[248748]: Oct 14 06:11:33 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Deactivated successfully. Oct 14 06:11:33 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Consumed 14.851s CPU time. Oct 14 06:11:33 localhost systemd-machined[205044]: Machine qemu-1-instance-00000007 terminated. 
Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.250 2 INFO nova.compute.manager [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Took 3.63 seconds for pre_live_migration on destination host np0005486732.localdomain.#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.251 2 DEBUG nova.compute.manager [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.280 2 INFO nova.virt.libvirt.driver [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance shutdown successfully after 24 seconds.#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.286 2 INFO nova.virt.libvirt.driver [-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance destroyed successfully.#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.287 2 DEBUG nova.objects.instance [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lazy-loading 'numa_topology' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.303 2 DEBUG nova.compute.manager [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] live_migration 
data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpfafahloz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='daabd3b0-5555-49e7-a72f-51f6e096611a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(73c8560b-0e97-4c62-8543-1cd0ed3ebde3),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.306 2 DEBUG nova.objects.instance [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Lazy-loading 'migration_context' on Instance uuid daabd3b0-5555-49e7-a72f-51f6e096611a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.307 2 DEBUG nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.311 2 DEBUG nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Operation thread is still running 
_live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.311 2 DEBUG nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.336 2 DEBUG nova.virt.libvirt.vif [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T10:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-138942356',display_name='tempest-LiveMigrationTest-server-138942356',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='tempest-livemigrationtest-server-138942356',id=8,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-14T10:11:09Z,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4a912863089b4050b50010417538a2b4',ramdisk_id='',reservation_id='r-6hil40u9',resources=None,root_device_name='/dev/vda',roo
t_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1892895176',owner_user_name='tempest-LiveMigrationTest-1892895176-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-10-14T10:11:09Z,user_data=None,user_id='d6d06f9c969f4b25a388e6b1f8e79df2',uuid=daabd3b0-5555-49e7-a72f-51f6e096611a,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.337 2 DEBUG nova.network.os_vif_util [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Converting VIF {"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.338 2 DEBUG nova.network.os_vif_util [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Converted object 
VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:8c,bridge_name='br-int',has_traffic_filtering=True,id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae,network=Network(b031757f-f610-486e-b256-d0edeb3a8180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb622d7fd-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.338 2 DEBUG nova.virt.libvirt.migration [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Updating guest XML with vif config: Oct 14 06:11:34 localhost nova_compute[295778]: Oct 14 06:11:34 localhost nova_compute[295778]: Oct 14 06:11:34 localhost nova_compute[295778]: Oct 14 06:11:34 localhost nova_compute[295778]: Oct 14 06:11:34 localhost nova_compute[295778]: Oct 14 06:11:34 localhost nova_compute[295778]: Oct 14 06:11:34 localhost nova_compute[295778]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.339 2 DEBUG nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.375 2 INFO nova.virt.libvirt.driver [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Beginning cold snapshot process#033[00m Oct 14 06:11:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.561 2 DEBUG nova.virt.libvirt.imagebackend [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] No parent info for 4d7273e1-0c4b-46b6-bdfa-9a43be3f063a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.601 2 DEBUG nova.storage.rbd_utils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] creating snapshot(d32b24dd4fe74b71b34d72f30850c9e7) on rbd image(cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.814 2 DEBUG nova.virt.libvirt.migration [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.815 2 INFO nova.virt.libvirt.migration [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.884 2 INFO nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 
1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m Oct 14 06:11:34 localhost nova_compute[295778]: 2025-10-14 10:11:34.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.081 2 DEBUG nova.compute.manager [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-vif-unplugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.082 2 DEBUG oslo_concurrency.lockutils [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.083 2 DEBUG oslo_concurrency.lockutils [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 
14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.083 2 DEBUG oslo_concurrency.lockutils [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.084 2 DEBUG nova.compute.manager [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] No waiting events found dispatching network-vif-unplugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.084 2 DEBUG nova.compute.manager [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-vif-unplugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.084 2 DEBUG nova.compute.manager [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.085 2 DEBUG oslo_concurrency.lockutils [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.085 2 DEBUG oslo_concurrency.lockutils [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.086 2 DEBUG oslo_concurrency.lockutils [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 
0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.086 2 DEBUG nova.compute.manager [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] No waiting events found dispatching network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.086 2 WARNING nova.compute.manager [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received unexpected event network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae for instance with vm_state active and task_state migrating.#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.087 2 DEBUG nova.compute.manager [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-changed-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.087 2 DEBUG nova.compute.manager [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Refreshing instance network info cache due to event network-changed-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.088 2 DEBUG oslo_concurrency.lockutils [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "refresh_cache-daabd3b0-5555-49e7-a72f-51f6e096611a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.088 2 DEBUG oslo_concurrency.lockutils [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquired lock "refresh_cache-daabd3b0-5555-49e7-a72f-51f6e096611a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.088 2 DEBUG nova.network.neutron [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Refreshing network info cache for port b622d7fd-00d0-4a03-83ea-2c26ab2e6fae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Oct 14 06:11:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 990 KiB/s rd, 6.4 MiB/s wr, 202 op/s Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.387 2 DEBUG nova.virt.libvirt.migration [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), 
(600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.388 2 DEBUG nova.virt.libvirt.migration [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Oct 14 06:11:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e86 do_prune osdmap full prune enabled Oct 14 06:11:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e87 e87: 6 total, 6 up, 6 in Oct 14 06:11:35 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e87: 6 total, 6 up, 6 in Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.595 2 DEBUG nova.storage.rbd_utils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] cloning vms/cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk@d32b24dd4fe74b71b34d72f30850c9e7 to images/1b2e5d1e-4472-445c-9119-cf1b4b529c6d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.690 2 DEBUG nova.network.neutron [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Updated VIF entry in instance network info cache for port b622d7fd-00d0-4a03-83ea-2c26ab2e6fae. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.692 2 DEBUG nova.network.neutron [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Updating instance_info_cache with network_info: [{"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005486732.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.719 2 DEBUG oslo_concurrency.lockutils [req-a0e134d9-3c0f-4411-adf2-018957d73182 req-4de02656-25cb-4392-912a-337c5f266e68 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Releasing lock 
"refresh_cache-daabd3b0-5555-49e7-a72f-51f6e096611a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.797 2 DEBUG nova.storage.rbd_utils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] flattening images/1b2e5d1e-4472-445c-9119-cf1b4b529c6d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.895 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.896 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] VM Paused (Lifecycle Event)#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.902 2 DEBUG nova.virt.libvirt.migration [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.902 2 DEBUG nova.virt.libvirt.migration [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Oct 14 06:11:35 localhost 
nova_compute[295778]: 2025-10-14 10:11:35.931 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.937 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:35 localhost nova_compute[295778]: 2025-10-14 10:11:35.967 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] During sync_power_state the instance has a pending task (migrating). 
Skip.#033[00m Oct 14 06:11:36 localhost kernel: device tapb622d7fd-00 left promiscuous mode Oct 14 06:11:36 localhost NetworkManager[5972]: [1760436696.0799] device (tapb622d7fd-00): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:36 localhost ovn_controller[156286]: 2025-10-14T10:11:36Z|00074|binding|INFO|Releasing lport b622d7fd-00d0-4a03-83ea-2c26ab2e6fae from this chassis (sb_readonly=0) Oct 14 06:11:36 localhost ovn_controller[156286]: 2025-10-14T10:11:36Z|00075|binding|INFO|Setting lport b622d7fd-00d0-4a03-83ea-2c26ab2e6fae down in Southbound Oct 14 06:11:36 localhost ovn_controller[156286]: 2025-10-14T10:11:36Z|00076|binding|INFO|Releasing lport 677b0027-4428-47b7-b635-95f53cde1f8c from this chassis (sb_readonly=0) Oct 14 06:11:36 localhost ovn_controller[156286]: 2025-10-14T10:11:36Z|00077|binding|INFO|Setting lport 677b0027-4428-47b7-b635-95f53cde1f8c down in Southbound Oct 14 06:11:36 localhost ovn_controller[156286]: 2025-10-14T10:11:36Z|00078|binding|INFO|Removing iface tapb622d7fd-00 ovn-installed in OVS Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:36 localhost ovn_controller[156286]: 2025-10-14T10:11:36Z|00079|binding|INFO|Releasing lport eaac0aff-a3e3-4086-98c7-adc34e5a13a7 from this chassis (sb_readonly=0) Oct 14 06:11:36 localhost ovn_controller[156286]: 2025-10-14T10:11:36Z|00080|binding|INFO|Releasing lport 6f2773ed-54b3-461c-b14d-86e7f9734f2b from this chassis (sb_readonly=0) Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.150 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:4f:8c 10.100.0.8'], port_security=['fa:16:3e:4a:4f:8c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain,np0005486732.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'bfba0fbc-2817-4ef8-a192-47e9f930e160'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-145339109', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'daabd3b0-5555-49e7-a72f-51f6e096611a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b031757f-f610-486e-b256-d0edeb3a8180', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-145339109', 'neutron:project_id': '4a912863089b4050b50010417538a2b4', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f4a71cc4-401e-4fd9-a76d-664285c1f988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adbcad8c-50ba-42d0-91a9-e7edd5a551da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.153 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:9e:53 19.80.0.39'], port_security=['fa:16:3e:c7:9e:53 19.80.0.39'], type=, nat_addresses=[], virtual_parent=[], 
up=[False], options={'requested-chassis': ''}, parent_port=['b622d7fd-00d0-4a03-83ea-2c26ab2e6fae'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-144727916', 'neutron:cidrs': '19.80.0.39/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e45db34f-2947-4d1e-954d-d27d42257e3e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-144727916', 'neutron:project_id': '4a912863089b4050b50010417538a2b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'f4a71cc4-401e-4fd9-a76d-664285c1f988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=4e681d1f-d417-4332-aa34-0b36bc9d8797, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=677b0027-4428-47b7-b635-95f53cde1f8c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.154 161932 INFO neutron.agent.ovn.metadata.agent [-] Port b622d7fd-00d0-4a03-83ea-2c26ab2e6fae in datapath b031757f-f610-486e-b256-d0edeb3a8180 unbound from our chassis#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.158 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port c90d9f1c-2551-49e2-96db-58c80ebed69e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.159 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b031757f-f610-486e-b256-d0edeb3a8180, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.160 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[51e6cba6-1a6c-45af-b7a9-219455b123f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.161 161932 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180 namespace which is not needed anymore#033[00m Oct 14 06:11:36 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Deactivated successfully. Oct 14 06:11:36 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Consumed 16.021s CPU time. Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:36 localhost systemd-machined[205044]: Machine qemu-2-instance-00000008 terminated. Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180[322755]: [NOTICE] (322759) : haproxy version is 2.8.14-c23fe91 Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180[322755]: [NOTICE] (322759) : path to executable is /usr/sbin/haproxy Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180[322755]: [WARNING] (322759) : Exiting Master process... Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180[322755]: [WARNING] (322759) : Exiting Master process... 
Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180[322755]: [ALERT] (322759) : Current worker (322761) exited with code 143 (Terminated) Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180[322755]: [WARNING] (322759) : All workers exited. Exiting... (0) Oct 14 06:11:36 localhost systemd[1]: libpod-4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d.scope: Deactivated successfully. Oct 14 06:11:36 localhost podman[323621]: 2025-10-14 10:11:36.333169493 +0000 UTC m=+0.060727208 container died 4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 14 06:11:36 localhost journal[235816]: Unable to get XATTR trusted.libvirt.security.ref_selinux on daabd3b0-5555-49e7-a72f-51f6e096611a_disk: No such file or directory Oct 14 06:11:36 localhost journal[235816]: Unable to get XATTR trusted.libvirt.security.ref_dac on daabd3b0-5555-49e7-a72f-51f6e096611a_disk: No such file or directory Oct 14 06:11:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:11:36 localhost NetworkManager[5972]: [1760436696.3750] manager: (tapb622d7fd-00): new Tun device (/org/freedesktop/NetworkManager/Devices/23) Oct 14 06:11:36 localhost podman[323621]: 2025-10-14 10:11:36.380470945 +0000 UTC m=+0.108028660 container cleanup 4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.402 2 DEBUG nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.403 2 DEBUG nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.403 2 DEBUG nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Migration operation thread notification 
thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.407 2 DEBUG nova.virt.libvirt.guest [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'daabd3b0-5555-49e7-a72f-51f6e096611a' (instance-00000008) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.407 2 INFO nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Migration operation has completed#033[00m Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.408 2 INFO nova.compute.manager [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] _post_live_migration() is started..#033[00m Oct 14 06:11:36 localhost podman[323635]: 2025-10-14 10:11:36.425335793 +0000 UTC m=+0.080908223 container cleanup 4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:11:36 localhost systemd[1]: 
libpod-conmon-4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d.scope: Deactivated successfully. Oct 14 06:11:36 localhost podman[323660]: 2025-10-14 10:11:36.474599238 +0000 UTC m=+0.072165332 container remove 4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.480 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ca97d1-3ae2-465f-867f-d68ac0f29df2]: (4, ('Tue Oct 14 10:11:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180 (4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d)\n4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d\nTue Oct 14 10:11:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180 (4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d)\n4f4a1a48c97db27e06e68fcb04427074760809e5eb620fc3ce37b463fa11d35d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.483 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[519ca695-30e4-4349-a5a5-e8ffb1a1c557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.485 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, 
port=tapb031757f-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:36 localhost kernel: device tapb031757f-f0 left promiscuous mode Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.506 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae80a2c-09a3-49ec-a4d7-2ae5bb53f0cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.522 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[0267dac7-2b5f-4c80-b58f-db23eaf31b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.523 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[2db10443-cb63-4ddf-89b9-745a3abb9996]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.540 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[10981590-3e8b-4051-a6c5-b5aabda0294f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 
65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 
'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1275826, 'reachable_time': 41190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], 
['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323682, 'error': None, 'target': 'ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.542 162035 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b031757f-f610-486e-b256-d0edeb3a8180 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.542 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[96348f50-7dcf-4a98-ae59-6dbd09c8a944]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.543 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 677b0027-4428-47b7-b635-95f53cde1f8c in datapath e45db34f-2947-4d1e-954d-d27d42257e3e unbound from our chassis#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.549 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e45db34f-2947-4d1e-954d-d27d42257e3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.550 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[af89d46d-d2ed-4d57-8b95-7b554ffb5243]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.551 161932 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e namespace which is not needed anymore#033[00m Oct 14 
06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.688 2 DEBUG nova.storage.rbd_utils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] removing snapshot(d32b24dd4fe74b71b34d72f30850c9e7) on rbd image(cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e[322869]: [NOTICE] (322873) : haproxy version is 2.8.14-c23fe91 Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e[322869]: [NOTICE] (322873) : path to executable is /usr/sbin/haproxy Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e[322869]: [ALERT] (322873) : Current worker (322875) exited with code 143 (Terminated) Oct 14 06:11:36 localhost neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e[322869]: [WARNING] (322873) : All workers exited. Exiting... (0) Oct 14 06:11:36 localhost systemd[1]: libpod-dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970.scope: Deactivated successfully. 
Oct 14 06:11:36 localhost podman[323718]: 2025-10-14 10:11:36.750318127 +0000 UTC m=+0.077284756 container died dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:11:36 localhost podman[323718]: 2025-10-14 10:11:36.798979116 +0000 UTC m=+0.125945675 container cleanup dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:11:36 localhost podman[323730]: 2025-10-14 10:11:36.8146303 +0000 UTC m=+0.060886212 container cleanup dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3) Oct 14 06:11:36 localhost systemd[1]: libpod-conmon-dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970.scope: Deactivated successfully. Oct 14 06:11:36 localhost podman[323747]: 2025-10-14 10:11:36.874590928 +0000 UTC m=+0.056544789 container remove dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.879 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce12318-be59-42da-b23c-f732d00d5db1]: (4, ('Tue Oct 14 10:11:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e (dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970)\ndcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970\nTue Oct 14 10:11:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e (dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970)\ndcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.882 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f33d08-48d8-4b1e-8ab0-ba049030d93d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.883 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] 
Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape45db34f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:36 localhost kernel: device tape45db34f-20 left promiscuous mode Oct 14 06:11:36 localhost nova_compute[295778]: 2025-10-14 10:11:36.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.902 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[aa74811e-aab6-49f7-b914-8624a6c0fc5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.918 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[ee58dc17-3bdb-460d-bd67-9ae04f7d1d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.920 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e485db0a-ced9-422d-a95d-1d4d0264ca57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.934 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[495943d8-194e-410a-b2a3-74e57ce138a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 
65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 
'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1275952, 'reachable_time': 23604, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323769, 'error': None, 'target': 
'ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.937 162035 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e45db34f-2947-4d1e-954d-d27d42257e3e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 14 06:11:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:36.937 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[69e05dce-2d0b-4e9f-ad32-8a441eed8414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 279 KiB/s wr, 50 op/s Oct 14 06:11:37 localhost nova_compute[295778]: 2025-10-14 10:11:37.133 2 DEBUG nova.compute.manager [req-ca0135f1-1adc-4758-8bee-830f603da383 req-87276d85-b1d1-4900-989d-36e7e6629ee3 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-vif-unplugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:37 localhost nova_compute[295778]: 2025-10-14 10:11:37.133 2 DEBUG oslo_concurrency.lockutils [req-ca0135f1-1adc-4758-8bee-830f603da383 req-87276d85-b1d1-4900-989d-36e7e6629ee3 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:37 localhost nova_compute[295778]: 2025-10-14 10:11:37.134 2 
DEBUG oslo_concurrency.lockutils [req-ca0135f1-1adc-4758-8bee-830f603da383 req-87276d85-b1d1-4900-989d-36e7e6629ee3 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:37 localhost nova_compute[295778]: 2025-10-14 10:11:37.134 2 DEBUG oslo_concurrency.lockutils [req-ca0135f1-1adc-4758-8bee-830f603da383 req-87276d85-b1d1-4900-989d-36e7e6629ee3 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:37 localhost nova_compute[295778]: 2025-10-14 10:11:37.135 2 DEBUG nova.compute.manager [req-ca0135f1-1adc-4758-8bee-830f603da383 req-87276d85-b1d1-4900-989d-36e7e6629ee3 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] No waiting events found dispatching network-vif-unplugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:11:37 localhost nova_compute[295778]: 2025-10-14 10:11:37.135 2 DEBUG nova.compute.manager [req-ca0135f1-1adc-4758-8bee-830f603da383 req-87276d85-b1d1-4900-989d-36e7e6629ee3 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-vif-unplugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Oct 14 06:11:37 localhost systemd[1]: var-lib-containers-storage-overlay-20140e5c032c2c5295c9e9dd6d0ca62d2e406a4e33b70969c6991603a0543326-merged.mount: Deactivated successfully. Oct 14 06:11:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcbc8d5aedd8eb589427a155615b5cc024e4daa65f3b6e4a4295dde6d50ff970-userdata-shm.mount: Deactivated successfully. Oct 14 06:11:37 localhost systemd[1]: run-netns-ovnmeta\x2de45db34f\x2d2947\x2d4d1e\x2d954d\x2dd27d42257e3e.mount: Deactivated successfully. Oct 14 06:11:37 localhost systemd[1]: var-lib-containers-storage-overlay-7febd8cf85fac29bb2be531e8fbc6265932f8c1081dc1647d7c2e95f1a0f98dc-merged.mount: Deactivated successfully. Oct 14 06:11:37 localhost systemd[1]: run-netns-ovnmeta\x2db031757f\x2df610\x2d486e\x2db256\x2dd0edeb3a8180.mount: Deactivated successfully. Oct 14 06:11:37 localhost nova_compute[295778]: 2025-10-14 10:11:37.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e87 do_prune osdmap full prune enabled Oct 14 06:11:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e88 e88: 6 total, 6 up, 6 in Oct 14 06:11:37 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e88: 6 total, 6 up, 6 in Oct 14 06:11:37 localhost nova_compute[295778]: 2025-10-14 10:11:37.629 2 DEBUG nova.storage.rbd_utils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] creating snapshot(snap) on rbd image(1b2e5d1e-4472-445c-9119-cf1b4b529c6d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.339 2 DEBUG nova.network.neutron [None 
req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Activated binding for port b622d7fd-00d0-4a03-83ea-2c26ab2e6fae and host np0005486732.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.340 2 DEBUG nova.compute.manager [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.341 2 DEBUG nova.virt.libvirt.vif [None 
req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T10:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-138942356',display_name='tempest-LiveMigrationTest-server-138942356',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='tempest-livemigrationtest-server-138942356',id=8,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-14T10:11:09Z,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4a912863089b4050b50010417538a2b4',ramdisk_id='',reservation_id='r-6hil40u9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1892895176',owner_user_name='tempest-LiveMigrationTest-1892895176-project-
member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-10-14T10:11:12Z,user_data=None,user_id='d6d06f9c969f4b25a388e6b1f8e79df2',uuid=daabd3b0-5555-49e7-a72f-51f6e096611a,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.342 2 DEBUG nova.network.os_vif_util [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Converting VIF {"id": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "address": "fa:16:3e:4a:4f:8c", "network": {"id": "b031757f-f610-486e-b256-d0edeb3a8180", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1705330756-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "4a912863089b4050b50010417538a2b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb622d7fd-00", "ovs_interfaceid": "b622d7fd-00d0-4a03-83ea-2c26ab2e6fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.342 2 DEBUG nova.network.os_vif_util [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:8c,bridge_name='br-int',has_traffic_filtering=True,id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae,network=Network(b031757f-f610-486e-b256-d0edeb3a8180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb622d7fd-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.343 2 DEBUG os_vif [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Unplugging vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:8c,bridge_name='br-int',has_traffic_filtering=True,id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae,network=Network(b031757f-f610-486e-b256-d0edeb3a8180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb622d7fd-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb622d7fd-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.352 2 INFO os_vif [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:8c,bridge_name='br-int',has_traffic_filtering=True,id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae,network=Network(b031757f-f610-486e-b256-d0edeb3a8180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb622d7fd-00')#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.353 2 DEBUG oslo_concurrency.lockutils [None 
req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.353 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.354 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.354 2 DEBUG nova.compute.manager [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.355 2 INFO nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Deleting instance 
files /var/lib/nova/instances/daabd3b0-5555-49e7-a72f-51f6e096611a_del#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.355 2 INFO nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Deletion of /var/lib/nova/instances/daabd3b0-5555-49e7-a72f-51f6e096611a_del complete#033[00m Oct 14 06:11:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e88 do_prune osdmap full prune enabled Oct 14 06:11:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e89 e89: 6 total, 6 up, 6 in Oct 14 06:11:38 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.939 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.939 2 INFO nova.compute.manager [-] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] VM Stopped (Lifecycle Event)#033[00m Oct 14 06:11:38 localhost nova_compute[295778]: 2025-10-14 10:11:38.958 2 DEBUG nova.compute.manager [None req-e578eebc-4cf1-48a1-9f70-1f29c1334285 - - - - - -] [instance: 51c986ce-19c4-46c3-80e9-9367d31f15ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 81 KiB/s wr, 18 op/s Oct 14 06:11:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:11:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:11:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:11:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:11:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:11:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:11:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:11:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:39 localhost podman[323788]: 2025-10-14 10:11:39.557577691 +0000 UTC m=+0.090030365 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:11:39 localhost podman[323788]: 2025-10-14 10:11:39.598188747 +0000 UTC m=+0.130641441 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 14 06:11:39 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:11:39 localhost nova_compute[295778]: 2025-10-14 10:11:39.914 2 DEBUG nova.compute.manager [req-ffb517f7-1304-4bdb-bbec-59cf10119948 req-58732860-7f49-4227-9eac-9c5ac87f4b5f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received event network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:11:39 localhost nova_compute[295778]: 2025-10-14 10:11:39.915 2 DEBUG oslo_concurrency.lockutils [req-ffb517f7-1304-4bdb-bbec-59cf10119948 req-58732860-7f49-4227-9eac-9c5ac87f4b5f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:39 localhost nova_compute[295778]: 2025-10-14 10:11:39.915 2 DEBUG oslo_concurrency.lockutils [req-ffb517f7-1304-4bdb-bbec-59cf10119948 req-58732860-7f49-4227-9eac-9c5ac87f4b5f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:39 
localhost nova_compute[295778]: 2025-10-14 10:11:39.915 2 DEBUG oslo_concurrency.lockutils [req-ffb517f7-1304-4bdb-bbec-59cf10119948 req-58732860-7f49-4227-9eac-9c5ac87f4b5f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:39 localhost nova_compute[295778]: 2025-10-14 10:11:39.916 2 DEBUG nova.compute.manager [req-ffb517f7-1304-4bdb-bbec-59cf10119948 req-58732860-7f49-4227-9eac-9c5ac87f4b5f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] No waiting events found dispatching network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:11:39 localhost nova_compute[295778]: 2025-10-14 10:11:39.916 2 WARNING nova.compute.manager [req-ffb517f7-1304-4bdb-bbec-59cf10119948 req-58732860-7f49-4227-9eac-9c5ac87f4b5f da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Received unexpected event network-vif-plugged-b622d7fd-00d0-4a03-83ea-2c26ab2e6fae for instance with vm_state active and task_state migrating.#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.140 2 INFO nova.virt.libvirt.driver [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Snapshot image upload complete#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.140 2 DEBUG nova.compute.manager [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 
09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.200 2 INFO nova.compute.manager [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Shelve offloading#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.207 2 INFO nova.virt.libvirt.driver [-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance destroyed successfully.#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.208 2 DEBUG nova.compute.manager [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.210 2 DEBUG oslo_concurrency.lockutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.210 2 DEBUG oslo_concurrency.lockutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquired lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.211 2 DEBUG nova.network.neutron [None 
req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.293 2 DEBUG nova.network.neutron [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.914 2 DEBUG nova.network.neutron [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.930 2 DEBUG oslo_concurrency.lockutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Releasing lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.940 2 INFO nova.virt.libvirt.driver [-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance destroyed successfully.#033[00m Oct 14 06:11:40 localhost nova_compute[295778]: 2025-10-14 10:11:40.940 2 DEBUG nova.objects.instance [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] 
Lazy-loading 'resources' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v114: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 229 op/s Oct 14 06:11:41 localhost nova_compute[295778]: 2025-10-14 10:11:41.599 2 INFO nova.virt.libvirt.driver [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Deleting instance files /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5_del#033[00m Oct 14 06:11:41 localhost nova_compute[295778]: 2025-10-14 10:11:41.600 2 INFO nova.virt.libvirt.driver [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Deletion of /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5_del complete#033[00m Oct 14 06:11:41 localhost nova_compute[295778]: 2025-10-14 10:11:41.702 2 DEBUG nova.virt.libvirt.host [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Oct 14 06:11:41 localhost nova_compute[295778]: 2025-10-14 10:11:41.703 2 INFO nova.virt.libvirt.host [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] UEFI support detected#033[00m Oct 14 06:11:41 localhost nova_compute[295778]: 2025-10-14 10:11:41.741 2 INFO nova.scheduler.client.report [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default 
default] Deleted allocations for instance cc1adead-5ea6-42fa-9c12-f4d35462f1a5#033[00m Oct 14 06:11:41 localhost nova_compute[295778]: 2025-10-14 10:11:41.820 2 DEBUG oslo_concurrency.lockutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:41 localhost nova_compute[295778]: 2025-10-14 10:11:41.821 2 DEBUG oslo_concurrency.lockutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:41 localhost nova_compute[295778]: 2025-10-14 10:11:41.899 2 DEBUG oslo_concurrency.processutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.048 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Acquiring lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.048 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 
1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.049 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Lock "daabd3b0-5555-49e7-a72f-51f6e096611a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.069 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:42 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3185783219' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.382 2 DEBUG oslo_concurrency.processutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.389 2 DEBUG nova.compute.provider_tree [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.444 2 DEBUG nova.scheduler.client.report [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 
2025-10-14 10:11:42.498 2 DEBUG oslo_concurrency.lockutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.502 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.502 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.502 2 DEBUG nova.compute.resource_tracker [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.503 2 DEBUG oslo_concurrency.processutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Running cmd 
(subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.579 2 DEBUG oslo_concurrency.lockutils [None req-e5987fac-2b22-4f43-995c-be0738bf1937 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" "released" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: held 32.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:42 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1564405620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:42 localhost nova_compute[295778]: 2025-10-14 10:11:42.957 2 DEBUG oslo_concurrency.processutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.2 MiB/s rd, 6.1 MiB/s wr, 181 op/s Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.179 2 WARNING nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.181 2 DEBUG nova.compute.resource_tracker [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11550MB free_disk=41.56388854980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, 
{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.182 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.182 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.226 2 DEBUG nova.compute.resource_tracker [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Migration for instance daabd3b0-5555-49e7-a72f-51f6e096611a refers to another host's instance! 
_pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.262 2 DEBUG nova.compute.resource_tracker [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.290 2 DEBUG nova.compute.resource_tracker [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Migration 73c8560b-0e97-4c62-8543-1cd0ed3ebde3 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.291 2 DEBUG nova.compute.resource_tracker [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.291 2 DEBUG nova.compute.resource_tracker [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.333 2 DEBUG oslo_concurrency.processutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:11:43 localhost podman[323874]: 2025-10-14 10:11:43.551811439 +0000 UTC m=+0.087795025 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:11:43 localhost podman[323875]: 2025-10-14 10:11:43.606541569 +0000 UTC m=+0.137539643 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 
06:11:43 localhost podman[323875]: 2025-10-14 10:11:43.619049279 +0000 UTC m=+0.150047353 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:11:43 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:11:43 localhost podman[323874]: 2025-10-14 10:11:43.636130471 +0000 UTC m=+0.172113997 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Oct 14 06:11:43 localhost systemd[1]: 
6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:11:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:43 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4162203872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.807 2 DEBUG oslo_concurrency.processutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:43 localhost nova_compute[295778]: 2025-10-14 10:11:43.815 2 DEBUG nova.compute.provider_tree [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.096 2 DEBUG nova.scheduler.client.report [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.133 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Acquiring lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.134 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" acquired by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.134 2 INFO nova.compute.manager [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Unshelving#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.139 2 DEBUG nova.compute.resource_tracker [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.139 2 DEBUG oslo_concurrency.lockutils [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.145 2 INFO nova.compute.manager [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Migrating instance to np0005486732.localdomain finished successfully.#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.231 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.231 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.233 2 DEBUG nova.objects.instance [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lazy-loading 'pci_requests' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.245 2 DEBUG nova.objects.instance [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 
24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lazy-loading 'numa_topology' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.254 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.254 2 INFO nova.compute.claims [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Claim successful on node np0005486731.localdomain#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.257 2 INFO nova.scheduler.client.report [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] Deleted allocation for migration 73c8560b-0e97-4c62-8543-1cd0ed3ebde3#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.257 2 DEBUG nova.virt.libvirt.driver [None req-4b49f4f4-779d-4057-a82a-ecdb68070739 1967610653ee4421bc32ddae91d6f0a2 85e3913d136b45ffb773eb96325628dd - - default default] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.362 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Running cmd 
(subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e89 do_prune osdmap full prune enabled Oct 14 06:11:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e90 e90: 6 total, 6 up, 6 in Oct 14 06:11:44 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e90: 6 total, 6 up, 6 in Oct 14 06:11:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:11:44 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/262664000' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.825 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.832 2 DEBUG nova.compute.provider_tree [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.853 2 DEBUG nova.scheduler.client.report [None 
req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.882 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.972 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Acquiring lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.974 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Acquired lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:11:44 localhost nova_compute[295778]: 2025-10-14 10:11:44.974 2 DEBUG nova.network.neutron 
[None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.071 2 DEBUG nova.network.neutron [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 14 06:11:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 226 op/s Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.297 2 DEBUG nova.network.neutron [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:11:45 localhost neutron_sriov_agent[263389]: 2025-10-14 10:11:45.298 2 INFO neutron.agent.securitygroups_rpc [None req-11c3923f-ead6-4142-90fb-c63bc24f49b8 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Security group member updated ['08e02d40-7eb0-493a-bf38-79869188d51f']#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.311 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Releasing lock 
"refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.313 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.314 2 INFO nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Creating image(s)#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.351 2 DEBUG nova.storage.rbd_utils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.356 2 DEBUG nova.objects.instance [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lazy-loading 'trusted_certs' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.411 2 DEBUG nova.storage.rbd_utils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.457 2 DEBUG nova.storage.rbd_utils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.467 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Acquiring lock "33a54b1d63fe8981a68a99c0d58697d2d31dfaaa" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.469 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lock "33a54b1d63fe8981a68a99c0d58697d2d31dfaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.536 2 DEBUG nova.virt.libvirt.imagebackend [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Image locations are: [{'url': 'rbd://fcadf6e2-9176-5818-a8d0-37b19acf8eaf/images/1b2e5d1e-4472-445c-9119-cf1b4b529c6d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fcadf6e2-9176-5818-a8d0-37b19acf8eaf/images/1b2e5d1e-4472-445c-9119-cf1b4b529c6d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Oct 14 06:11:45 
localhost dnsmasq[321355]: read /var/lib/neutron/dhcp/326e2535-2661-4046-aab8-cd9fa2cc08f1/addn_hosts - 0 addresses Oct 14 06:11:45 localhost dnsmasq-dhcp[321355]: read /var/lib/neutron/dhcp/326e2535-2661-4046-aab8-cd9fa2cc08f1/host Oct 14 06:11:45 localhost dnsmasq-dhcp[321355]: read /var/lib/neutron/dhcp/326e2535-2661-4046-aab8-cd9fa2cc08f1/opts Oct 14 06:11:45 localhost podman[324028]: 2025-10-14 10:11:45.573182106 +0000 UTC m=+0.070636891 container kill ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-326e2535-2661-4046-aab8-cd9fa2cc08f1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.647 2 DEBUG nova.virt.libvirt.imagebackend [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Selected location: {'url': 'rbd://fcadf6e2-9176-5818-a8d0-37b19acf8eaf/images/1b2e5d1e-4472-445c-9119-cf1b4b529c6d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.648 2 DEBUG nova.storage.rbd_utils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] cloning images/1b2e5d1e-4472-445c-9119-cf1b4b529c6d@snap to None/cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Oct 14 06:11:45 localhost nova_compute[295778]: 2025-10-14 10:11:45.828 
2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lock "33a54b1d63fe8981a68a99c0d58697d2d31dfaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.044 2 DEBUG nova.objects.instance [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lazy-loading 'migration_context' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.135 2 DEBUG nova.storage.rbd_utils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] flattening vms/cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Oct 14 06:11:46 localhost dnsmasq[321355]: exiting on receipt of SIGTERM Oct 14 06:11:46 localhost podman[324226]: 2025-10-14 10:11:46.475454934 +0000 UTC m=+0.089084490 container kill ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-326e2535-2661-4046-aab8-cd9fa2cc08f1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:11:46 localhost systemd[1]: tmp-crun.GeIYrq.mount: Deactivated successfully. 
Oct 14 06:11:46 localhost systemd[1]: libpod-ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05.scope: Deactivated successfully. Oct 14 06:11:46 localhost ovn_controller[156286]: 2025-10-14T10:11:46Z|00081|binding|INFO|Removing iface tap1990655e-34 ovn-installed in OVS Oct 14 06:11:46 localhost ovn_controller[156286]: 2025-10-14T10:11:46Z|00082|binding|INFO|Removing lport 1990655e-3485-4339-810b-3bca12b6d76b ovn-installed in OVS Oct 14 06:11:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:46.545 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port af3a05e7-dee4-4ed7-a280-37038ee76db0 with type ""#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:46.547 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-326e2535-2661-4046-aab8-cd9fa2cc08f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6e7f435b24646ecaa54e485b818329f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, 
additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bde85ee0-511c-4612-bae5-13cb9e42823c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1990655e-3485-4339-810b-3bca12b6d76b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:46.549 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 1990655e-3485-4339-810b-3bca12b6d76b in datapath 326e2535-2661-4046-aab8-cd9fa2cc08f1 unbound from our chassis#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:46.554 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 326e2535-2661-4046-aab8-cd9fa2cc08f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:11:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:46.555 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc583ed-19f2-4aa5-b53e-4ed43dd15b2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:46 localhost podman[324240]: 2025-10-14 10:11:46.570186522 +0000 UTC m=+0.071627708 container died ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-326e2535-2661-4046-aab8-cd9fa2cc08f1, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:11:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05-userdata-shm.mount: Deactivated successfully. Oct 14 06:11:46 localhost systemd[1]: var-lib-containers-storage-overlay-9f7e8d913a54542bae8268e7c51ab86fa204372ade1110d4da699e8fda260b87-merged.mount: Deactivated successfully. Oct 14 06:11:46 localhost podman[324240]: 2025-10-14 10:11:46.628074884 +0000 UTC m=+0.129516050 container remove ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-326e2535-2661-4046-aab8-cd9fa2cc08f1, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:46 localhost kernel: device tap1990655e-34 left promiscuous mode Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:46 localhost systemd[1]: libpod-conmon-ea57b5f02f619d8b5c7c2aa0b122f9bb0012e9090b035b95f8a997b03ae13e05.scope: Deactivated successfully. Oct 14 06:11:46 localhost systemd[1]: run-netns-qdhcp\x2d326e2535\x2d2661\x2d4046\x2daab8\x2dcd9fa2cc08f1.mount: Deactivated successfully. 
Oct 14 06:11:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:11:46.697 270389 INFO neutron.agent.dhcp.agent [None req-5d3474a4-1459-440e-9dee-21a188d8d7e8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:11:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:11:46.923 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.950 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Image rbd:vms/cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.951 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.952 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Ensure instance console log exists: /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.952 2 DEBUG oslo_concurrency.lockutils [None 
req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.952 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.953 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.955 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T10:11:10Z,direct_url=,disk_format='raw',id=1b2e5d1e-4472-445c-9119-cf1b4b529c6d,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-766913962-shelved',owner='09d62a810b754dce9a74b97c3df09013',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-10-14T10:11:39Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.960 2 WARNING nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.962 2 DEBUG nova.virt.libvirt.host [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.966 2 DEBUG nova.virt.libvirt.host [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] CPU controller missing on host. 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.969 2 DEBUG nova.virt.libvirt.host [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.969 2 DEBUG nova.virt.libvirt.host [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.970 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.971 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T10:09:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3d2e2556-398d-47fa-b582-04a393026796',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta 
ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T10:11:10Z,direct_url=,disk_format='raw',id=1b2e5d1e-4472-445c-9119-cf1b4b529c6d,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-766913962-shelved',owner='09d62a810b754dce9a74b97c3df09013',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-10-14T10:11:39Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.971 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.972 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.972 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.972 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.973 2 
DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.973 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.973 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.974 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.974 2 DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.974 2 
DEBUG nova.virt.hardware [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.975 2 DEBUG nova.objects.instance [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lazy-loading 'vcpu_model' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:46 localhost nova_compute[295778]: 2025-10-14 10:11:46.992 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 5.6 MiB/s rd, 5.5 MiB/s wr, 200 op/s Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:11:47 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2099089524' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.450 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.484 2 DEBUG nova.storage.rbd_utils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.489 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:11:47 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3622276837' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.904 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.906 2 DEBUG nova.objects.instance [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lazy-loading 'pci_devices' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.925 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] End _get_guest_xml xml=
[guest domain XML garbled in capture: the angle-bracket markup was stripped, leaving only element text interleaved with per-line journald prefixes. Recoverable field values: uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5, name instance-00000007 (tempest-UnshelveToHostMultiNodesTest-server-766913962), memory 131072, vcpus 1, creation time 2025-10-14 10:11:46, owner tempest-UnshelveToHostMultiNodesTest-643946357 (user tempest-UnshelveToHostMultiNodesTest-643946357-project-member), sysinfo RDO / OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9, os type hvm, rng backend /dev/urandom]
Oct 14 06:11:47 localhost nova_compute[295778]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.978 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] No BDM found with device name vda, not building metadata.
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.979 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:11:47 localhost nova_compute[295778]: 2025-10-14 10:11:47.980 2 INFO nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Using config drive#033[00m Oct 14 06:11:47 localhost neutron_sriov_agent[263389]: 2025-10-14 10:11:47.991 2 INFO neutron.agent.securitygroups_rpc [None req-9ee942bf-f520-45b5-876d-66b5a8e4d8b6 4a2c72478a7c4747a73158cd8119b6ba d6e7f435b24646ecaa54e485b818329f - - default default] Security group member updated ['08e02d40-7eb0-493a-bf38-79869188d51f']#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.018 2 DEBUG nova.storage.rbd_utils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.044 2 DEBUG nova.objects.instance [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lazy-loading 'ec2_ids' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.076 2 
DEBUG nova.objects.instance [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lazy-loading 'keypairs' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.145 2 INFO nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Creating config drive at /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.151 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacs9h62a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.290 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacs9h62a" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.332 2 DEBUG 
nova.storage.rbd_utils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] rbd image cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.339 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.559 2 DEBUG oslo_concurrency.processutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.560 2 INFO nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Deleting local config drive 
/var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5/disk.config because it was imported into RBD.#033[00m Oct 14 06:11:48 localhost systemd-machined[205044]: New machine qemu-4-instance-00000007. Oct 14 06:11:48 localhost systemd[1]: Started Virtual Machine qemu-4-instance-00000007. Oct 14 06:11:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:11:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/615503913' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:11:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:11:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/615503913' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.881 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.883 2 INFO nova.compute.manager [-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] VM Stopped (Lifecycle Event)#033[00m Oct 14 06:11:48 localhost nova_compute[295778]: 2025-10-14 10:11:48.903 2 DEBUG nova.compute.manager [None req-eafb3a5c-127d-4ad5-a602-1d288bfc7deb - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v119: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 170 op/s Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 
10:11:49.447 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.448 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] VM Resumed (Lifecycle Event)#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.451 2 DEBUG nova.compute.manager [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.452 2 DEBUG nova.virt.libvirt.driver [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.455 2 INFO nova.virt.libvirt.driver [-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance spawned successfully.#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.467 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.470 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Synchronizing 
instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.493 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.493 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.494 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] VM Started (Lifecycle Event)#033[00m Oct 14 06:11:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.526 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.532 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:11:49 localhost nova_compute[295778]: 2025-10-14 10:11:49.548 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Oct 14 06:11:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:11:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:11:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e90 do_prune osdmap full prune enabled Oct 14 06:11:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e91 e91: 6 total, 6 up, 6 in Oct 14 06:11:50 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e91: 6 total, 6 up, 6 in Oct 14 06:11:50 localhost podman[324445]: 2025-10-14 10:11:50.635192474 +0000 UTC m=+0.153704530 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:11:50 localhost podman[324444]: 2025-10-14 10:11:50.59802377 +0000 UTC m=+0.121798776 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid) Oct 14 06:11:50 localhost podman[324444]: 2025-10-14 10:11:50.677045523 +0000 UTC m=+0.200820599 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Oct 14 06:11:50 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:11:50 localhost podman[324445]: 2025-10-14 10:11:50.729545632 +0000 UTC m=+0.248057688 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:11:50 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:11:51 localhost nova_compute[295778]: 2025-10-14 10:11:51.126 2 DEBUG nova.compute.manager [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v121: 177 pgs: 177 active+clean; 304 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 229 op/s Oct 14 06:11:51 localhost nova_compute[295778]: 2025-10-14 10:11:51.264 2 DEBUG oslo_concurrency.lockutils [None req-1e4dd150-a9b0-422c-92e6-274b3785001c 24797af2e34f44319684f3ba243636a3 b501f065f5f444c2ae972347ec63d69c - - default default] Lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" "released" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: held 7.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:11:51.376 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:10:50Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, 
extra_dhcp_opts=[], fixed_ips=[], id=b622d7fd-00d0-4a03-83ea-2c26ab2e6fae, ip_allocation=immediate, mac_address=fa:16:3e:4a:4f:8c, name=tempest-parent-145339109, network_id=b031757f-f610-486e-b256-d0edeb3a8180, port_security_enabled=True, project_id=4a912863089b4050b50010417538a2b4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=13, security_groups=['f4a71cc4-401e-4fd9-a76d-664285c1f988'], standard_attr_id=324, status=DOWN, tags=[], tenant_id=4a912863089b4050b50010417538a2b4, trunk_details=sub_ports=[], trunk_id=7953f0af-3e00-4aa5-8261-15e5663a4a9c, updated_at=2025-10-14T10:11:50Z on network b031757f-f610-486e-b256-d0edeb3a8180#033[00m Oct 14 06:11:51 localhost nova_compute[295778]: 2025-10-14 10:11:51.388 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:11:51 localhost nova_compute[295778]: 2025-10-14 10:11:51.389 2 INFO nova.compute.manager [-] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] VM Stopped (Lifecycle Event)#033[00m Oct 14 06:11:51 localhost nova_compute[295778]: 2025-10-14 10:11:51.407 2 DEBUG nova.compute.manager [None req-c1a1d923-5505-4fa1-bd57-d43897f73d82 - - - - - -] [instance: daabd3b0-5555-49e7-a72f-51f6e096611a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:11:51 localhost dnsmasq[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/addn_hosts - 2 addresses Oct 14 06:11:51 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/host Oct 14 06:11:51 localhost podman[324498]: 2025-10-14 10:11:51.673638277 +0000 UTC m=+0.084910699 container kill 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:11:51 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/opts Oct 14 06:11:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:11:52.100 270389 INFO neutron.agent.dhcp.agent [None req-18f66aef-c1d2-401f-a248-5717732f1bf4 - - - - - -] DHCP configuration for ports {'b622d7fd-00d0-4a03-83ea-2c26ab2e6fae'} is completed#033[00m Oct 14 06:11:52 localhost nova_compute[295778]: 2025-10-14 10:11:52.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:52 localhost nova_compute[295778]: 2025-10-14 10:11:52.625 2 DEBUG oslo_concurrency.lockutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:52 localhost nova_compute[295778]: 2025-10-14 10:11:52.626 2 DEBUG oslo_concurrency.lockutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" acquired by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:52 localhost nova_compute[295778]: 2025-10-14 10:11:52.627 2 INFO nova.compute.manager [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 
2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Shelving#033[00m Oct 14 06:11:52 localhost nova_compute[295778]: 2025-10-14 10:11:52.657 2 DEBUG nova.virt.libvirt.driver [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Oct 14 06:11:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 304 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 5.7 MiB/s rd, 5.4 MiB/s wr, 213 op/s Oct 14 06:11:53 localhost nova_compute[295778]: 2025-10-14 10:11:53.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:54 localhost neutron_sriov_agent[263389]: 2025-10-14 10:11:54.083 2 INFO neutron.agent.securitygroups_rpc [None req-cfceca67-8a1d-4506-8406-61d97cd102f7 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Security group member updated ['f4a71cc4-401e-4fd9-a76d-664285c1f988']#033[00m Oct 14 06:11:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 240 op/s Oct 14 06:11:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:11:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 06:11:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:11:55 localhost systemd[1]: tmp-crun.W3prNk.mount: Deactivated successfully. Oct 14 06:11:55 localhost podman[324520]: 2025-10-14 10:11:55.560230696 +0000 UTC m=+0.095518129 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350, distribution-scope=public, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Oct 14 06:11:55 localhost podman[324520]: 2025-10-14 10:11:55.6007915 +0000 UTC m=+0.136078873 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 14 06:11:55 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:11:55 localhost podman[324521]: 2025-10-14 10:11:55.603293326 +0000 UTC m=+0.135890638 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 06:11:55 localhost podman[324522]: 2025-10-14 10:11:55.661633271 +0000 UTC m=+0.189149829 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:11:55 localhost podman[324522]: 2025-10-14 10:11:55.676537245 +0000 UTC m=+0.204053763 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:11:55 localhost podman[324521]: 2025-10-14 10:11:55.686345465 +0000 UTC m=+0.218942727 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:11:55 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:11:55 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:11:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:11:55.735 2 INFO neutron.agent.securitygroups_rpc [None req-ab79e25f-ba50-4f71-a6eb-077d18b144c8 d6d06f9c969f4b25a388e6b1f8e79df2 4a912863089b4050b50010417538a2b4 - - default default] Security group member updated ['f4a71cc4-401e-4fd9-a76d-664285c1f988']#033[00m Oct 14 06:11:55 localhost dnsmasq[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/addn_hosts - 1 addresses Oct 14 06:11:55 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/host Oct 14 06:11:55 localhost podman[324603]: 2025-10-14 10:11:55.987036806 +0000 UTC m=+0.071844443 container kill 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 14 06:11:55 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/opts Oct 14 06:11:56 localhost dnsmasq[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/addn_hosts - 0 addresses Oct 14 06:11:56 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/host Oct 14 06:11:56 localhost dnsmasq-dhcp[320864]: read /var/lib/neutron/dhcp/b031757f-f610-486e-b256-d0edeb3a8180/opts Oct 14 06:11:56 localhost podman[324642]: 2025-10-14 10:11:56.668787876 +0000 UTC m=+0.057363540 container kill 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:11:56 localhost nova_compute[295778]: 2025-10-14 10:11:56.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:56 localhost ovn_controller[156286]: 2025-10-14T10:11:56Z|00083|binding|INFO|Releasing lport 48199f38-fd12-4dec-9835-6635c7e5c5a7 from this chassis (sb_readonly=0) Oct 14 06:11:56 localhost ovn_controller[156286]: 2025-10-14T10:11:56Z|00084|binding|INFO|Setting lport 48199f38-fd12-4dec-9835-6635c7e5c5a7 down in Southbound Oct 14 06:11:56 localhost kernel: device tap48199f38-fd left promiscuous mode Oct 14 06:11:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:56.848 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-b031757f-f610-486e-b256-d0edeb3a8180', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b031757f-f610-486e-b256-d0edeb3a8180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a912863089b4050b50010417538a2b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adbcad8c-50ba-42d0-91a9-e7edd5a551da, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=48199f38-fd12-4dec-9835-6635c7e5c5a7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:11:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:56.850 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 48199f38-fd12-4dec-9835-6635c7e5c5a7 in datapath b031757f-f610-486e-b256-d0edeb3a8180 unbound from our chassis#033[00m Oct 14 06:11:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:56.854 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b031757f-f610-486e-b256-d0edeb3a8180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:11:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:56.855 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b2e2a6-9f11-48ce-9387-204a6a6e8190]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:11:56 localhost nova_compute[295778]: 2025-10-14 10:11:56.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 240 op/s Oct 14 06:11:57 localhost nova_compute[295778]: 2025-10-14 10:11:57.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:57.638 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:11:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:57.638 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:11:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:11:57.638 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:11:58 localhost nova_compute[295778]: 2025-10-14 10:11:58.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:59 localhost nova_compute[295778]: 2025-10-14 10:11:59.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:11:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 240 op/s Oct 14 06:11:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:11:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e91 do_prune osdmap full prune enabled Oct 14 06:11:59 
localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e92 e92: 6 total, 6 up, 6 in Oct 14 06:11:59 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e92: 6 total, 6 up, 6 in Oct 14 06:11:59 localhost dnsmasq[320864]: exiting on receipt of SIGTERM Oct 14 06:11:59 localhost podman[324679]: 2025-10-14 10:11:59.597827143 +0000 UTC m=+0.067764715 container kill 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:11:59 localhost systemd[1]: libpod-9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58.scope: Deactivated successfully. Oct 14 06:11:59 localhost podman[324693]: 2025-10-14 10:11:59.688856433 +0000 UTC m=+0.073824256 container died 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 06:11:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:11:59 localhost podman[324693]: 2025-10-14 10:11:59.731136803 +0000 UTC m=+0.116104696 container cleanup 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS) Oct 14 06:11:59 localhost systemd[1]: libpod-conmon-9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58.scope: Deactivated successfully. Oct 14 06:11:59 localhost podman[324695]: 2025-10-14 10:11:59.757939152 +0000 UTC m=+0.134653456 container remove 9adcc5a992f0396fda52c97d1041efebfdc78c127e0aac9af84caa0cf0bc4e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b031757f-f610-486e-b256-d0edeb3a8180, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:11:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:11:59.788 270389 INFO neutron.agent.dhcp.agent [None req-a67dd5e2-f624-4ada-811f-7c81229081c2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:12:00 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:00.103 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:12:00 localhost podman[246584]: time="2025-10-14T10:12:00Z" level=info 
msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:12:00 localhost systemd[1]: var-lib-containers-storage-overlay-247d19e5e405a2c3e489ac20d85c13b38e92abef18e4fcf8019011d07fea3338-merged.mount: Deactivated successfully. Oct 14 06:12:00 localhost systemd[1]: run-netns-qdhcp\x2db031757f\x2df610\x2d486e\x2db256\x2dd0edeb3a8180.mount: Deactivated successfully. Oct 14 06:12:00 localhost podman[246584]: @ - - [14/Oct/2025:10:12:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146312 "" "Go-http-client/1.1" Oct 14 06:12:00 localhost podman[246584]: @ - - [14/Oct/2025:10:12:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19350 "" "Go-http-client/1.1" Oct 14 06:12:00 localhost ovn_controller[156286]: 2025-10-14T10:12:00Z|00085|ovn_bfd|INFO|Disabled BFD on interface ovn-31b4da-0 Oct 14 06:12:00 localhost ovn_controller[156286]: 2025-10-14T10:12:00Z|00086|ovn_bfd|INFO|Disabled BFD on interface ovn-953af5-0 Oct 14 06:12:00 localhost ovn_controller[156286]: 2025-10-14T10:12:00Z|00087|ovn_bfd|INFO|Disabled BFD on interface ovn-4e3575-0 Oct 14 06:12:00 localhost nova_compute[295778]: 2025-10-14 10:12:00.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:00 localhost nova_compute[295778]: 2025-10-14 10:12:00.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:00 localhost nova_compute[295778]: 2025-10-14 10:12:00.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:00 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:00.751 270389 INFO neutron.agent.linux.ip_lib [None req-768df673-98ca-4f25-8914-27357c867ad8 - - - - - -] Device 
tap246ed0bf-2d cannot be used as it has no MAC address#033[00m Oct 14 06:12:00 localhost nova_compute[295778]: 2025-10-14 10:12:00.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:00 localhost kernel: device tap246ed0bf-2d entered promiscuous mode Oct 14 06:12:00 localhost systemd-udevd[324750]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:12:00 localhost NetworkManager[5972]: [1760436720.7840] manager: (tap246ed0bf-2d): new Generic device (/org/freedesktop/NetworkManager/Devices/24) Oct 14 06:12:00 localhost ovn_controller[156286]: 2025-10-14T10:12:00Z|00088|binding|INFO|Claiming lport 246ed0bf-2dad-459b-b388-d7c73000c67a for this chassis. Oct 14 06:12:00 localhost ovn_controller[156286]: 2025-10-14T10:12:00Z|00089|binding|INFO|246ed0bf-2dad-459b-b388-d7c73000c67a: Claiming unknown Oct 14 06:12:00 localhost nova_compute[295778]: 2025-10-14 10:12:00.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:00.799 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-ddc78d4b-b803-455e-9391-1c0ccb5ab584', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddc78d4b-b803-455e-9391-1c0ccb5ab584', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'0ccc6bab21fc41d1aa6b1c0671853cd5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=663d4bd2-dd10-43c8-8599-8e134395bf17, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=246ed0bf-2dad-459b-b388-d7c73000c67a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:12:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:00.800 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 246ed0bf-2dad-459b-b388-d7c73000c67a in datapath ddc78d4b-b803-455e-9391-1c0ccb5ab584 bound to our chassis#033[00m Oct 14 06:12:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:00.801 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6f4e168e-c0c0-412a-8275-5c4668a23581 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:12:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:00.801 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddc78d4b-b803-455e-9391-1c0ccb5ab584, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:12:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:00.802 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7a76241e-b821-483c-b1b0-30104c570c32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:00 localhost journal[236030]: ethtool ioctl error on tap246ed0bf-2d: No such device Oct 14 06:12:00 localhost ovn_controller[156286]: 2025-10-14T10:12:00Z|00090|binding|INFO|Setting lport 
246ed0bf-2dad-459b-b388-d7c73000c67a ovn-installed in OVS Oct 14 06:12:00 localhost ovn_controller[156286]: 2025-10-14T10:12:00Z|00091|binding|INFO|Setting lport 246ed0bf-2dad-459b-b388-d7c73000c67a up in Southbound Oct 14 06:12:00 localhost journal[236030]: ethtool ioctl error on tap246ed0bf-2d: No such device Oct 14 06:12:00 localhost nova_compute[295778]: 2025-10-14 10:12:00.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:00 localhost journal[236030]: ethtool ioctl error on tap246ed0bf-2d: No such device Oct 14 06:12:00 localhost journal[236030]: ethtool ioctl error on tap246ed0bf-2d: No such device Oct 14 06:12:00 localhost journal[236030]: ethtool ioctl error on tap246ed0bf-2d: No such device Oct 14 06:12:00 localhost journal[236030]: ethtool ioctl error on tap246ed0bf-2d: No such device Oct 14 06:12:00 localhost journal[236030]: ethtool ioctl error on tap246ed0bf-2d: No such device Oct 14 06:12:00 localhost journal[236030]: ethtool ioctl error on tap246ed0bf-2d: No such device Oct 14 06:12:00 localhost systemd[1]: tmp-crun.p3A1JB.mount: Deactivated successfully. 
Oct 14 06:12:00 localhost dnsmasq[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/addn_hosts - 0 addresses Oct 14 06:12:00 localhost podman[324749]: 2025-10-14 10:12:00.862588488 +0000 UTC m=+0.072069399 container kill afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:12:00 localhost dnsmasq-dhcp[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/host Oct 14 06:12:00 localhost nova_compute[295778]: 2025-10-14 10:12:00.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:00 localhost dnsmasq-dhcp[321102]: read /var/lib/neutron/dhcp/ba133567-4ba1-4d96-820a-7959b7dc36a2/opts Oct 14 06:12:00 localhost nova_compute[295778]: 2025-10-14 10:12:00.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:01 localhost nova_compute[295778]: 2025-10-14 10:12:01.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:01 localhost ovn_controller[156286]: 2025-10-14T10:12:01Z|00092|binding|INFO|Releasing lport 282e238e-dd4a-4ab2-b9f4-b7da821184de from this chassis (sb_readonly=0) Oct 14 06:12:01 localhost ovn_controller[156286]: 2025-10-14T10:12:01Z|00093|binding|INFO|Setting lport 282e238e-dd4a-4ab2-b9f4-b7da821184de down in Southbound Oct 14 06:12:01 
localhost kernel: device tap282e238e-dd left promiscuous mode Oct 14 06:12:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:01.031 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-ba133567-4ba1-4d96-820a-7959b7dc36a2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba133567-4ba1-4d96-820a-7959b7dc36a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85e3913d136b45ffb773eb96325628dd', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7024b04b-2440-4a06-b6d2-b00d9850a0f2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=282e238e-dd4a-4ab2-b9f4-b7da821184de) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:12:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:01.032 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 282e238e-dd4a-4ab2-b9f4-b7da821184de in datapath ba133567-4ba1-4d96-820a-7959b7dc36a2 unbound from our chassis#033[00m Oct 14 06:12:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:01.034 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 
ba133567-4ba1-4d96-820a-7959b7dc36a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:12:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:01.035 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[c05b1222-84e0-4668-9ae8-1df8f3c1e11f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:01 localhost nova_compute[295778]: 2025-10-14 10:12:01.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 818 B/s wr, 90 op/s Oct 14 06:12:01 localhost podman[324843]: Oct 14 06:12:01 localhost podman[324843]: 2025-10-14 10:12:01.761768614 +0000 UTC m=+0.088706510 container create b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:12:01 localhost podman[324843]: 2025-10-14 10:12:01.716953338 +0000 UTC m=+0.043891314 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:12:01 localhost systemd[1]: Started libpod-conmon-b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561.scope. Oct 14 06:12:01 localhost systemd[1]: tmp-crun.sp6LtC.mount: Deactivated successfully. 
Oct 14 06:12:01 localhost systemd[1]: Started libcrun container. Oct 14 06:12:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a958a2a61aadf70b4d83af961c67fa2fd6334bf681975e49bcde0199339b3c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:12:01 localhost podman[324843]: 2025-10-14 10:12:01.865090989 +0000 UTC m=+0.192028895 container init b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:12:01 localhost podman[324843]: 2025-10-14 10:12:01.873928414 +0000 UTC m=+0.200866310 container start b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 14 06:12:01 localhost dnsmasq[324862]: started, version 2.85 cachesize 150 Oct 14 06:12:01 localhost dnsmasq[324862]: DNS service limited to local subnets Oct 14 06:12:01 localhost dnsmasq[324862]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 
06:12:01 localhost dnsmasq[324862]: warning: no upstream servers configured Oct 14 06:12:01 localhost dnsmasq-dhcp[324862]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:12:01 localhost dnsmasq[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/addn_hosts - 0 addresses Oct 14 06:12:01 localhost dnsmasq-dhcp[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/host Oct 14 06:12:01 localhost dnsmasq-dhcp[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/opts Oct 14 06:12:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:02.015 270389 INFO neutron.agent.dhcp.agent [None req-c7163dff-30e0-4c15-977e-1e98db79feaa - - - - - -] DHCP configuration for ports {'e11b1c46-a2a3-484c-be86-d3b925474bbd'} is completed#033[00m Oct 14 06:12:02 localhost nova_compute[295778]: 2025-10-14 10:12:02.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:02 localhost nova_compute[295778]: 2025-10-14 10:12:02.723 2 DEBUG nova.virt.libvirt.driver [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Oct 14 06:12:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:02.964 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:12:02Z, description=, device_id=82ce39a3-0c7e-4492-9620-2979cac3b1f9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=ccefc6ec-054f-44a2-9048-02134e8ed9f4, ip_allocation=immediate, mac_address=fa:16:3e:56:ad:69, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:11:58Z, description=, dns_domain=, id=ddc78d4b-b803-455e-9391-1c0ccb5ab584, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-1551349768-network, port_security_enabled=True, project_id=0ccc6bab21fc41d1aa6b1c0671853cd5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15799, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=599, status=ACTIVE, subnets=['f789a725-cdc2-4038-8bd1-9df362f65359'], tags=[], tenant_id=0ccc6bab21fc41d1aa6b1c0671853cd5, updated_at=2025-10-14T10:11:59Z, vlan_transparent=None, network_id=ddc78d4b-b803-455e-9391-1c0ccb5ab584, port_security_enabled=False, project_id=0ccc6bab21fc41d1aa6b1c0671853cd5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=617, status=DOWN, tags=[], tenant_id=0ccc6bab21fc41d1aa6b1c0671853cd5, updated_at=2025-10-14T10:12:02Z on network ddc78d4b-b803-455e-9391-1c0ccb5ab584#033[00m Oct 14 06:12:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 818 B/s wr, 90 op/s Oct 14 06:12:03 localhost podman[324880]: 2025-10-14 10:12:03.208814515 +0000 UTC m=+0.056508987 container kill b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:12:03 localhost dnsmasq[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/addn_hosts - 1 addresses Oct 14 06:12:03 localhost dnsmasq-dhcp[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/host Oct 14 06:12:03 localhost dnsmasq-dhcp[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/opts Oct 14 06:12:03 localhost openstack_network_exporter[248748]: ERROR 10:12:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:12:03 localhost openstack_network_exporter[248748]: ERROR 10:12:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:12:03 localhost openstack_network_exporter[248748]: ERROR 10:12:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:12:03 localhost openstack_network_exporter[248748]: ERROR 10:12:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:12:03 localhost openstack_network_exporter[248748]: Oct 14 06:12:03 localhost openstack_network_exporter[248748]: ERROR 10:12:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:12:03 localhost openstack_network_exporter[248748]: Oct 14 06:12:03 localhost nova_compute[295778]: 2025-10-14 10:12:03.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:03.482 270389 INFO neutron.agent.dhcp.agent [None req-78788243-ee5b-4098-b649-ab1d059f5e96 - - - - - -] DHCP configuration for ports {'ccefc6ec-054f-44a2-9048-02134e8ed9f4'} is completed#033[00m Oct 14 06:12:03 localhost 
dnsmasq[321102]: exiting on receipt of SIGTERM Oct 14 06:12:03 localhost systemd[1]: libpod-afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d.scope: Deactivated successfully. Oct 14 06:12:03 localhost podman[324918]: 2025-10-14 10:12:03.816800801 +0000 UTC m=+0.060822131 container kill afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 06:12:03 localhost podman[324930]: 2025-10-14 10:12:03.889158307 +0000 UTC m=+0.062068634 container died afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:12:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:12:03 localhost podman[324930]: 2025-10-14 10:12:03.928235142 +0000 UTC m=+0.101145419 container cleanup afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:12:03 localhost systemd[1]: libpod-conmon-afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d.scope: Deactivated successfully. Oct 14 06:12:03 localhost podman[324937]: 2025-10-14 10:12:03.994787034 +0000 UTC m=+0.154600425 container remove afe7f31c3eaa45ad15c0c68206edf8c605a386c77961dfc434ac1c76dbb2e25d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba133567-4ba1-4d96-820a-7959b7dc36a2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 06:12:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:04.048 270389 INFO neutron.agent.dhcp.agent [None req-5e042ee1-c679-47bf-b6c9-26a873bed239 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:12:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:04.513 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:12:02Z, description=, device_id=82ce39a3-0c7e-4492-9620-2979cac3b1f9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ccefc6ec-054f-44a2-9048-02134e8ed9f4, ip_allocation=immediate, mac_address=fa:16:3e:56:ad:69, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:11:58Z, description=, dns_domain=, id=ddc78d4b-b803-455e-9391-1c0ccb5ab584, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-1551349768-network, port_security_enabled=True, project_id=0ccc6bab21fc41d1aa6b1c0671853cd5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15799, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=599, status=ACTIVE, subnets=['f789a725-cdc2-4038-8bd1-9df362f65359'], tags=[], tenant_id=0ccc6bab21fc41d1aa6b1c0671853cd5, updated_at=2025-10-14T10:11:59Z, vlan_transparent=None, network_id=ddc78d4b-b803-455e-9391-1c0ccb5ab584, port_security_enabled=False, project_id=0ccc6bab21fc41d1aa6b1c0671853cd5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=617, status=DOWN, tags=[], tenant_id=0ccc6bab21fc41d1aa6b1c0671853cd5, updated_at=2025-10-14T10:12:02Z on network ddc78d4b-b803-455e-9391-1c0ccb5ab584#033[00m Oct 14 06:12:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:04.518 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:12:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:04 localhost dnsmasq[324862]: read 
/var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/addn_hosts - 1 addresses Oct 14 06:12:04 localhost dnsmasq-dhcp[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/host Oct 14 06:12:04 localhost podman[324974]: 2025-10-14 10:12:04.758320598 +0000 UTC m=+0.061779257 container kill b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 06:12:04 localhost dnsmasq-dhcp[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/opts Oct 14 06:12:04 localhost nova_compute[295778]: 2025-10-14 10:12:04.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:04 localhost systemd[1]: var-lib-containers-storage-overlay-0b022c8eb6125a41a006f39be92961a1e2880d38e62a27b62b58c103edf5fe65-merged.mount: Deactivated successfully. Oct 14 06:12:04 localhost systemd[1]: run-netns-qdhcp\x2dba133567\x2d4ba1\x2d4d96\x2d820a\x2d7959b7dc36a2.mount: Deactivated successfully. Oct 14 06:12:05 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:05.048 270389 INFO neutron.agent.dhcp.agent [None req-393fef8d-9660-4183-a634-a40ab0752b26 - - - - - -] DHCP configuration for ports {'ccefc6ec-054f-44a2-9048-02134e8ed9f4'} is completed#033[00m Oct 14 06:12:05 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Deactivated successfully. Oct 14 06:12:05 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Consumed 13.562s CPU time. 
Oct 14 06:12:05 localhost systemd-machined[205044]: Machine qemu-4-instance-00000007 terminated. Oct 14 06:12:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 226 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 637 KiB/s rd, 35 KiB/s wr, 55 op/s Oct 14 06:12:05 localhost ovn_controller[156286]: 2025-10-14T10:12:05Z|00094|ovn_bfd|INFO|Enabled BFD on interface ovn-31b4da-0 Oct 14 06:12:05 localhost ovn_controller[156286]: 2025-10-14T10:12:05Z|00095|ovn_bfd|INFO|Enabled BFD on interface ovn-953af5-0 Oct 14 06:12:05 localhost ovn_controller[156286]: 2025-10-14T10:12:05Z|00096|ovn_bfd|INFO|Enabled BFD on interface ovn-4e3575-0 Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.740 2 INFO 
nova.virt.libvirt.driver [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance shutdown successfully after 13 seconds.#033[00m Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.747 2 INFO nova.virt.libvirt.driver [-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance destroyed successfully.#033[00m Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.747 2 DEBUG nova.objects.instance [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lazy-loading 'numa_topology' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:12:05 localhost nova_compute[295778]: 2025-10-14 10:12:05.826 2 INFO nova.virt.libvirt.driver [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Beginning cold snapshot process#033[00m Oct 14 06:12:06 localhost nova_compute[295778]: 2025-10-14 10:12:06.012 2 DEBUG nova.virt.libvirt.imagebackend [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] No parent info for 4d7273e1-0c4b-46b6-bdfa-9a43be3f063a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m Oct 14 06:12:06 localhost nova_compute[295778]: 2025-10-14 10:12:06.106 2 DEBUG nova.storage.rbd_utils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] creating snapshot(ff7dd5f43a0144fe950bd741990f989e) on rbd image(cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk) create_snap 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Oct 14 06:12:06 localhost nova_compute[295778]: 2025-10-14 10:12:06.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e92 do_prune osdmap full prune enabled Oct 14 06:12:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e93 e93: 6 total, 6 up, 6 in Oct 14 06:12:06 localhost nova_compute[295778]: 2025-10-14 10:12:06.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:06 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e93: 6 total, 6 up, 6 in Oct 14 06:12:06 localhost nova_compute[295778]: 2025-10-14 10:12:06.669 2 DEBUG nova.storage.rbd_utils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] cloning vms/cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk@ff7dd5f43a0144fe950bd741990f989e to images/ce19c57b-7e47-4122-9915-16bcf85be863 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Oct 14 06:12:06 localhost nova_compute[295778]: 2025-10-14 10:12:06.847 2 DEBUG nova.storage.rbd_utils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] flattening images/ce19c57b-7e47-4122-9915-16bcf85be863 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Oct 14 06:12:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v131: 177 pgs: 177 active+clean; 226 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 796 KiB/s rd, 44 KiB/s wr, 69 op/s Oct 14 06:12:07 localhost nova_compute[295778]: 2025-10-14 10:12:07.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:07 localhost nova_compute[295778]: 2025-10-14 10:12:07.658 2 DEBUG nova.storage.rbd_utils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] removing snapshot(ff7dd5f43a0144fe950bd741990f989e) on rbd image(cc1adead-5ea6-42fa-9c12-f4d35462f1a5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m Oct 14 06:12:08 localhost nova_compute[295778]: 2025-10-14 10:12:08.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e93 do_prune osdmap full prune enabled Oct 14 06:12:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e94 e94: 6 total, 6 up, 6 in Oct 14 06:12:08 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e94: 6 total, 6 up, 6 in Oct 14 06:12:08 localhost nova_compute[295778]: 2025-10-14 10:12:08.677 2 DEBUG nova.storage.rbd_utils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] creating snapshot(snap) on rbd image(ce19c57b-7e47-4122-9915-16bcf85be863) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Oct 14 06:12:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:12:09 Oct 14 06:12:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:12:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:12:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['volumes', 'manila_metadata', 'backups', 'manila_data', '.mgr', 'vms', 'images'] Oct 14 06:12:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:12:09 localhost ceph-mgr[300442]: log_channel(cluster) 
log [DBG] : pgmap v133: 177 pgs: 177 active+clean; 226 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 796 KiB/s rd, 44 KiB/s wr, 69 op/s Oct 14 06:12:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:12:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:12:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:12:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:12:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:12:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006581845861250698 of space, bias 1.0, pg target 1.3163691722501396 quantized to 32 (current 32) Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 
0.8555772569444443 quantized to 32 (current 32) Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:12:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019465818676716918 quantized to 16 (current 16) Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:12:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, 
start_after= Oct 14 06:12:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e94 do_prune osdmap full prune enabled Oct 14 06:12:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e95 e95: 6 total, 6 up, 6 in Oct 14 06:12:09 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e95: 6 total, 6 up, 6 in Oct 14 06:12:10 localhost ovn_controller[156286]: 2025-10-14T10:12:10Z|00097|ovn_bfd|INFO|Disabled BFD on interface ovn-31b4da-0 Oct 14 06:12:10 localhost ovn_controller[156286]: 2025-10-14T10:12:10Z|00098|ovn_bfd|INFO|Disabled BFD on interface ovn-953af5-0 Oct 14 06:12:10 localhost ovn_controller[156286]: 2025-10-14T10:12:10Z|00099|ovn_bfd|INFO|Disabled BFD on interface ovn-4e3575-0 Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:10 localhost dnsmasq[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/addn_hosts - 0 addresses Oct 14 06:12:10 localhost podman[325159]: 2025-10-14 10:12:10.175241244 +0000 UTC m=+0.065189297 container kill b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:12:10 localhost dnsmasq-dhcp[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/host Oct 14 06:12:10 localhost dnsmasq-dhcp[324862]: read /var/lib/neutron/dhcp/ddc78d4b-b803-455e-9391-1c0ccb5ab584/opts Oct 14 06:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:12:10 localhost podman[325174]: 2025-10-14 10:12:10.296169026 +0000 UTC m=+0.093857836 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS) Oct 14 06:12:10 localhost podman[325174]: 2025-10-14 10:12:10.309710564 +0000 UTC m=+0.107399354 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 06:12:10 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:10 localhost kernel: device tap246ed0bf-2d left promiscuous mode Oct 14 06:12:10 localhost ovn_controller[156286]: 2025-10-14T10:12:10Z|00100|binding|INFO|Releasing lport 246ed0bf-2dad-459b-b388-d7c73000c67a from this chassis (sb_readonly=0) Oct 14 06:12:10 localhost ovn_controller[156286]: 2025-10-14T10:12:10Z|00101|binding|INFO|Setting lport 246ed0bf-2dad-459b-b388-d7c73000c67a down in Southbound Oct 14 06:12:10 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:10.364 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-ddc78d4b-b803-455e-9391-1c0ccb5ab584', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-ddc78d4b-b803-455e-9391-1c0ccb5ab584', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ccc6bab21fc41d1aa6b1c0671853cd5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=663d4bd2-dd10-43c8-8599-8e134395bf17, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=246ed0bf-2dad-459b-b388-d7c73000c67a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:12:10 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:10.366 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 246ed0bf-2dad-459b-b388-d7c73000c67a in datapath ddc78d4b-b803-455e-9391-1c0ccb5ab584 unbound from our chassis#033[00m Oct 14 06:12:10 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:10.370 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddc78d4b-b803-455e-9391-1c0ccb5ab584, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:12:10 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:10.372 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[db614f60-4f9d-417c-ac2f-0a5d986257af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.402 2 INFO nova.virt.libvirt.driver [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 
2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Snapshot image upload complete#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.403 2 DEBUG nova.compute.manager [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.463 2 INFO nova.compute.manager [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Shelve offloading#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.470 2 INFO nova.virt.libvirt.driver [-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance destroyed successfully.#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.471 2 DEBUG nova.compute.manager [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.474 2 DEBUG oslo_concurrency.lockutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.474 2 DEBUG oslo_concurrency.lockutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 
2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Acquired lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.475 2 DEBUG nova.network.neutron [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.523 2 DEBUG nova.network.neutron [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.844 2 DEBUG nova.network.neutron [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.870 2 DEBUG oslo_concurrency.lockutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Releasing lock "refresh_cache-cc1adead-5ea6-42fa-9c12-f4d35462f1a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.880 2 INFO nova.virt.libvirt.driver 
[-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Instance destroyed successfully.#033[00m Oct 14 06:12:10 localhost nova_compute[295778]: 2025-10-14 10:12:10.881 2 DEBUG nova.objects.instance [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lazy-loading 'resources' on Instance uuid cc1adead-5ea6-42fa-9c12-f4d35462f1a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:12:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 160 op/s Oct 14 06:12:11 localhost nova_compute[295778]: 2025-10-14 10:12:11.506 2 INFO nova.virt.libvirt.driver [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Deleting instance files /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5_del#033[00m Oct 14 06:12:11 localhost nova_compute[295778]: 2025-10-14 10:12:11.507 2 INFO nova.virt.libvirt.driver [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Deletion of /var/lib/nova/instances/cc1adead-5ea6-42fa-9c12-f4d35462f1a5_del complete#033[00m Oct 14 06:12:11 localhost nova_compute[295778]: 2025-10-14 10:12:11.595 2 INFO nova.scheduler.client.report [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Deleted allocations for instance cc1adead-5ea6-42fa-9c12-f4d35462f1a5#033[00m Oct 14 06:12:11 localhost nova_compute[295778]: 2025-10-14 10:12:11.644 2 DEBUG oslo_concurrency.lockutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 
09d62a810b754dce9a74b97c3df09013 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:12:11 localhost nova_compute[295778]: 2025-10-14 10:12:11.645 2 DEBUG oslo_concurrency.lockutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:12:11 localhost nova_compute[295778]: 2025-10-14 10:12:11.665 2 DEBUG oslo_concurrency.processutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:12:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:12:12 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3443718495' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:12:12 localhost nova_compute[295778]: 2025-10-14 10:12:12.127 2 DEBUG oslo_concurrency.processutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:12:12 localhost nova_compute[295778]: 2025-10-14 10:12:12.132 2 DEBUG nova.compute.provider_tree [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:12:12 localhost nova_compute[295778]: 2025-10-14 10:12:12.151 2 DEBUG nova.scheduler.client.report [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:12:12 localhost nova_compute[295778]: 2025-10-14 10:12:12.180 2 DEBUG oslo_concurrency.lockutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:12:12 localhost dnsmasq[324862]: exiting on receipt of SIGTERM Oct 14 06:12:12 localhost systemd[1]: libpod-b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561.scope: Deactivated successfully. Oct 14 06:12:12 localhost podman[325259]: 2025-10-14 10:12:12.217587657 +0000 UTC m=+0.060563135 container kill b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:12:12 localhost nova_compute[295778]: 2025-10-14 10:12:12.237 2 DEBUG oslo_concurrency.lockutils [None req-48e5f319-b3e8-40a6-a9e9-9e92f3eb7ff9 2b68997505ae4e5eb94e8eb7def7754d 09d62a810b754dce9a74b97c3df09013 - - default default] Lock "cc1adead-5ea6-42fa-9c12-f4d35462f1a5" "released" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: held 19.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:12:12 localhost podman[325271]: 2025-10-14 10:12:12.288669318 +0000 UTC m=+0.054551505 container died b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:12:12 localhost systemd[1]: tmp-crun.sILSPg.mount: Deactivated successfully. Oct 14 06:12:12 localhost podman[325271]: 2025-10-14 10:12:12.328289257 +0000 UTC m=+0.094171414 container cleanup b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:12:12 localhost systemd[1]: libpod-conmon-b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561.scope: Deactivated successfully. 
Oct 14 06:12:12 localhost podman[325273]: 2025-10-14 10:12:12.362710509 +0000 UTC m=+0.125192156 container remove b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ddc78d4b-b803-455e-9391-1c0ccb5ab584, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:12:12 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:12.390 270389 INFO neutron.agent.dhcp.agent [None req-c28b6178-ed77-47be-a559-6bc4915ac295 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:12:12 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:12.566 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:12:12 localhost nova_compute[295778]: 2025-10-14 10:12:12.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:12 localhost nova_compute[295778]: 2025-10-14 10:12:12.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.2 MiB/s rd, 7.1 MiB/s wr, 146 op/s Oct 14 06:12:13 localhost systemd[1]: var-lib-containers-storage-overlay-1a958a2a61aadf70b4d83af961c67fa2fd6334bf681975e49bcde0199339b3c6-merged.mount: Deactivated successfully. 
Oct 14 06:12:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b422013329b1fa5c263d38302e43e8c5b54d1fc3e0f85c051b638b78c3003561-userdata-shm.mount: Deactivated successfully. Oct 14 06:12:13 localhost systemd[1]: run-netns-qdhcp\x2dddc78d4b\x2db803\x2d455e\x2d9391\x2d1c0ccb5ab584.mount: Deactivated successfully. Oct 14 06:12:13 localhost nova_compute[295778]: 2025-10-14 10:12:13.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:12:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e95 do_prune osdmap full prune enabled Oct 14 06:12:14 localhost podman[325299]: 2025-10-14 10:12:14.561933564 +0000 UTC m=+0.093806545 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Oct 14 06:12:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e96 e96: 6 total, 6 up, 6 in Oct 14 06:12:14 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e96: 6 total, 6 up, 6 in Oct 14 06:12:14 localhost podman[325299]: 2025-10-14 10:12:14.599220341 +0000 UTC m=+0.131093322 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:12:14 localhost podman[325300]: 2025-10-14 10:12:14.617960438 +0000 UTC m=+0.143305675 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 
'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:12:14 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:12:14 localhost podman[325300]: 2025-10-14 10:12:14.63621922 +0000 UTC m=+0.161564437 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:12:14 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:12:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 281 MiB data, 938 MiB used, 41 GiB / 42 GiB avail; 14 MiB/s rd, 13 MiB/s wr, 327 op/s Oct 14 06:12:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 281 MiB data, 938 MiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 266 op/s Oct 14 06:12:17 localhost nova_compute[295778]: 2025-10-14 10:12:17.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:18 localhost nova_compute[295778]: 2025-10-14 10:12:18.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:18 localhost nova_compute[295778]: 2025-10-14 10:12:18.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:18.499 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:12:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:18.500 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:12:18 
localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e96 do_prune osdmap full prune enabled Oct 14 06:12:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e97 e97: 6 total, 6 up, 6 in Oct 14 06:12:18 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e97: 6 total, 6 up, 6 in Oct 14 06:12:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 281 MiB data, 938 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.0 MiB/s wr, 146 op/s Oct 14 06:12:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:19.503 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:12:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0. 
Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.545272) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436739545327, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1615, "num_deletes": 257, "total_data_size": 1441584, "memory_usage": 1471824, "flush_reason": "Manual Compaction"} Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436739557957, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1408041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24222, "largest_seqno": 25836, "table_properties": {"data_size": 1401362, "index_size": 3765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14770, "raw_average_key_size": 20, "raw_value_size": 1387502, "raw_average_value_size": 1895, "num_data_blocks": 166, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436619, "oldest_key_time": 1760436619, "file_creation_time": 1760436739, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 12728 microseconds, and 4996 cpu microseconds. Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.558006) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1408041 bytes OK Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.558031) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.561205) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.561228) EVENT_LOG_v1 {"time_micros": 1760436739561221, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.561255) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1434546, prev total WAL file 
size 1435036, number of live WAL files 2. Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.562032) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303134' seq:72057594037927935, type:22 .. '6C6F676D0034323635' seq:0, type:0; will stop at (end) Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1375KB)], [42(17MB)] Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436739562103, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 19859167, "oldest_snapshot_seqno": -1} Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12483 keys, 19721118 bytes, temperature: kUnknown Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436739687630, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 19721118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19647985, "index_size": 40840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31237, "raw_key_size": 336795, "raw_average_key_size": 26, "raw_value_size": 19433306, 
"raw_average_value_size": 1556, "num_data_blocks": 1545, "num_entries": 12483, "num_filter_entries": 12483, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436739, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.688057) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 19721118 bytes Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.690053) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.0 rd, 156.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 17.6 +0.0 blob) out(18.8 +0.0 blob), read-write-amplify(28.1) write-amplify(14.0) OK, records in: 13017, records dropped: 534 output_compression: NoCompression Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.690086) EVENT_LOG_v1 {"time_micros": 1760436739690071, "job": 24, "event": "compaction_finished", "compaction_time_micros": 125680, "compaction_time_cpu_micros": 52781, "output_level": 6, "num_output_files": 1, "total_output_size": 19721118, "num_input_records": 13017, "num_output_records": 12483, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436739690485, "job": 24, "event": "table_file_deletion", "file_number": 44} Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436739693380, 
"job": 24, "event": "table_file_deletion", "file_number": 42} Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.561897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.693415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.693422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.693425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.693428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:19 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:19.693431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:19 localhost nova_compute[295778]: 2025-10-14 10:12:19.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:12:19 localhost nova_compute[295778]: 2025-10-14 10:12:19.937 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:12:19 localhost nova_compute[295778]: 2025-10-14 10:12:19.938 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - 
- - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:12:19 localhost nova_compute[295778]: 2025-10-14 10:12:19.938 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:12:19 localhost nova_compute[295778]: 2025-10-14 10:12:19.938 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:12:19 localhost nova_compute[295778]: 2025-10-14 10:12:19.939 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.294 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.295 2 INFO nova.compute.manager [-] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] VM Stopped (Lifecycle Event)#033[00m Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.321 2 DEBUG nova.compute.manager [None req-ad7b6667-2aba-4820-b079-f9f8b4a86279 - - - - - -] [instance: cc1adead-5ea6-42fa-9c12-f4d35462f1a5] Checking state _get_power_state 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.401 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.572 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.573 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11486MB free_disk=41.71338653564453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.574 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.574 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:12:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:12:20 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : 
from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:12:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:12:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:12:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:12:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:12:20 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 0b0c4ec0-2361-47ec-a3b1-e22928ece64c (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:12:20 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 0b0c4ec0-2361-47ec-a3b1-e22928ece64c (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:12:20 localhost ceph-mgr[300442]: [progress INFO root] Completed event 0b0c4ec0-2361-47ec-a3b1-e22928ece64c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:12:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:12:20 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:12:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 06:12:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.941 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:12:20 localhost nova_compute[295778]: 2025-10-14 10:12:20.942 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:12:21 localhost podman[325450]: 2025-10-14 10:12:21.016605395 +0000 UTC m=+0.096943618 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Oct 14 06:12:21 localhost podman[325449]: 2025-10-14 10:12:21.065529179 +0000 UTC m=+0.150968308 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Oct 14 06:12:21 localhost podman[325450]: 2025-10-14 10:12:21.090120121 +0000 UTC m=+0.170458344 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:12:21 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:12:21 localhost podman[325449]: 2025-10-14 10:12:21.106369881 +0000 UTC m=+0.191809000 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible) Oct 14 06:12:21 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:12:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 226 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 5.8 MiB/s wr, 281 op/s Oct 14 06:12:21 localhost nova_compute[295778]: 2025-10-14 10:12:21.462 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:12:21 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:12:21 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:12:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:12:21 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2392745221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:12:21 localhost nova_compute[295778]: 2025-10-14 10:12:21.894 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:12:21 localhost nova_compute[295778]: 2025-10-14 10:12:21.901 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:12:21 localhost nova_compute[295778]: 2025-10-14 10:12:21.918 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:12:21 localhost nova_compute[295778]: 2025-10-14 10:12:21.947 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:12:21 localhost nova_compute[295778]: 2025-10-14 10:12:21.947 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:12:22 localhost nova_compute[295778]: 2025-10-14 10:12:22.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 226 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 7.4 MiB/s rd, 5.4 MiB/s wr, 262 op/s Oct 14 06:12:23 localhost nova_compute[295778]: 2025-10-14 10:12:23.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:12:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:12:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:12:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:12:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:12:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:12:24 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:12:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:12:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:12:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e97 do_prune osdmap full 
prune enabled Oct 14 06:12:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e98 e98: 6 total, 6 up, 6 in Oct 14 06:12:24 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e98: 6 total, 6 up, 6 in Oct 14 06:12:24 localhost nova_compute[295778]: 2025-10-14 10:12:24.949 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:12:24 localhost nova_compute[295778]: 2025-10-14 10:12:24.949 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:12:24 localhost nova_compute[295778]: 2025-10-14 10:12:24.950 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:12:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 145 MiB data, 744 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 813 KiB/s wr, 196 op/s Oct 14 06:12:25 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:12:25 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:25.392 2 INFO neutron.agent.securitygroups_rpc [None req-5fb6e895-0b72-4fd9-afb2-e468fd4c9d8e a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Security group rule updated ['c2c1552c-9248-46c1-8391-9c390debaa3c']#033[00m Oct 14 06:12:25 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:25.580 2 INFO neutron.agent.securitygroups_rpc [None req-bf8acc07-8506-4dbe-a875-499669ac567e a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Security group rule updated ['c2c1552c-9248-46c1-8391-9c390debaa3c']#033[00m Oct 14 06:12:25 localhost nova_compute[295778]: 2025-10-14 10:12:25.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:12:25 localhost nova_compute[295778]: 2025-10-14 10:12:25.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:12:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. 
Oct 14 06:12:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:12:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:12:26 localhost podman[325509]: 2025-10-14 10:12:26.551974625 +0000 UTC m=+0.090638591 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.6, release=1755695350, vcs-type=git, name=ubi9-minimal)
Oct 14 06:12:26 localhost podman[325510]: 2025-10-14 10:12:26.604951308 +0000 UTC m=+0.141277612 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 14 06:12:26 localhost podman[325509]: 2025-10-14 10:12:26.673303068 +0000 UTC m=+0.211967024 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Oct 14 06:12:26 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:12:26 localhost podman[325511]: 2025-10-14 10:12:26.69343964 +0000 UTC m=+0.227126704 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 14 06:12:26 localhost podman[325510]: 2025-10-14 10:12:26.710219855 +0000 UTC m=+0.246546149 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 06:12:26 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:12:26 localhost podman[325511]: 2025-10-14 10:12:26.761630736 +0000 UTC m=+0.295317840 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 14 06:12:26 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:12:26 localhost nova_compute[295778]: 2025-10-14 10:12:26.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:12:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 145 MiB data, 744 MiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 766 KiB/s wr, 185 op/s
Oct 14 06:12:27 localhost nova_compute[295778]: 2025-10-14 10:12:27.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:12:27 localhost nova_compute[295778]: 2025-10-14 10:12:27.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.242 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquiring lock "9d663561-9fd7-4dea-b31c-23b820127bbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.243 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.264 2 DEBUG nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.720 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.721 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.726 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.727 2 INFO nova.compute.claims [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Claim successful on node np0005486731.localdomain
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.870 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.919 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.920 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 06:12:28 localhost nova_compute[295778]: 2025-10-14 10:12:28.986 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 06:12:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 145 MiB data, 744 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 651 KiB/s wr, 157 op/s
Oct 14 06:12:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 14 06:12:29 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1921320355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.337 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.343 2 DEBUG nova.compute.provider_tree [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.359 2 DEBUG nova.scheduler.client.report [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.385 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.386 2 DEBUG nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.447 2 DEBUG nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.448 2 DEBUG nova.network.neutron [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.462 2 INFO nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.482 2 DEBUG nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.536 2 DEBUG nova.policy [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 06:12:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.596 2 DEBUG nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.598 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.598 2 INFO nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Creating image(s)
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.632 2 DEBUG nova.storage.rbd_utils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] rbd image 9d663561-9fd7-4dea-b31c-23b820127bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.666 2 DEBUG nova.storage.rbd_utils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] rbd image 9d663561-9fd7-4dea-b31c-23b820127bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.700 2 DEBUG nova.storage.rbd_utils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] rbd image 9d663561-9fd7-4dea-b31c-23b820127bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.706 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.784 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.785 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquiring lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.786 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.787 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "bdde6caf5564ec49ea0a13ddf42a7463db9906e5" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.819 2 DEBUG nova.storage.rbd_utils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] rbd image 9d663561-9fd7-4dea-b31c-23b820127bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 06:12:29 localhost nova_compute[295778]: 2025-10-14 10:12:29.823 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 9d663561-9fd7-4dea-b31c-23b820127bbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 06:12:29 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:29.967 2 INFO neutron.agent.securitygroups_rpc [req-71942470-9079-4931-bb18-878b256d4354 req-27095b53-a69d-4785-aaad-da6bebb4cf09 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Security group member updated ['c2c1552c-9248-46c1-8391-9c390debaa3c']
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.177 2 DEBUG nova.network.neutron [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Successfully created port: f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.291 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bdde6caf5564ec49ea0a13ddf42a7463db9906e5 9d663561-9fd7-4dea-b31c-23b820127bbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.386 2 DEBUG nova.storage.rbd_utils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] resizing rbd image 9d663561-9fd7-4dea-b31c-23b820127bbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.544 2 DEBUG nova.objects.instance [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lazy-loading 'migration_context' on Instance uuid 9d663561-9fd7-4dea-b31c-23b820127bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.563 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.563 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Ensure instance console log exists: /var/lib/nova/instances/9d663561-9fd7-4dea-b31c-23b820127bbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.564 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.565 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.566 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:12:30 localhost podman[246584]: time="2025-10-14T10:12:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 06:12:30 localhost podman[246584]: @ - - [14/Oct/2025:10:12:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1"
Oct 14 06:12:30 localhost podman[246584]: @ - - [14/Oct/2025:10:12:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18881 "" "Go-http-client/1.1"
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.832 2 DEBUG nova.network.neutron [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Successfully updated port: f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.848 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquiring lock "refresh_cache-9d663561-9fd7-4dea-b31c-23b820127bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.849 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquired lock "refresh_cache-9d663561-9fd7-4dea-b31c-23b820127bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.850 2 DEBUG nova.network.neutron [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.872 2 DEBUG nova.compute.manager [req-e5c86cc1-4744-466c-ae95-cf7e3c5f6626 req-971a5c38-fdd3-4c2e-89c7-4ea541405016 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received event network-changed-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.873 2 DEBUG nova.compute.manager [req-e5c86cc1-4744-466c-ae95-cf7e3c5f6626 req-971a5c38-fdd3-4c2e-89c7-4ea541405016 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Refreshing instance network info cache due to event network-changed-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.874 2 DEBUG oslo_concurrency.lockutils [req-e5c86cc1-4744-466c-ae95-cf7e3c5f6626 req-971a5c38-fdd3-4c2e-89c7-4ea541405016 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "refresh_cache-9d663561-9fd7-4dea-b31c-23b820127bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.897 2 DEBUG nova.network.neutron [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Instance cache missing network info.
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 14 06:12:30 localhost nova_compute[295778]: 2025-10-14 10:12:30.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.101 2 DEBUG nova.network.neutron [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Updating instance_info_cache with network_info: [{"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 
10:12:31.122 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Releasing lock "refresh_cache-9d663561-9fd7-4dea-b31c-23b820127bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.123 2 DEBUG nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Instance network_info: |[{"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.123 2 DEBUG oslo_concurrency.lockutils [req-e5c86cc1-4744-466c-ae95-cf7e3c5f6626 
req-971a5c38-fdd3-4c2e-89c7-4ea541405016 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquired lock "refresh_cache-9d663561-9fd7-4dea-b31c-23b820127bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.124 2 DEBUG nova.network.neutron [req-e5c86cc1-4744-466c-ae95-cf7e3c5f6626 req-971a5c38-fdd3-4c2e-89c7-4ea541405016 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Refreshing network info cache for port f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.129 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Start _get_guest_xml network_info=[{"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": 
"f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T10:09:39Z,direct_url=,disk_format='qcow2',id=4d7273e1-0c4b-46b6-bdfa-9a43be3f063a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='41187b090f3d4818a32baa37ce8a3991',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-14T10:09:41Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.135 2 WARNING nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.143 2 DEBUG nova.virt.libvirt.host [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.144 2 DEBUG nova.virt.libvirt.host [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Oct 14 06:12:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 163 MiB data, 736 MiB used, 41 GiB / 42 GiB avail; 557 KiB/s rd, 294 KiB/s wr, 61 op/s Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.146 2 DEBUG nova.virt.libvirt.host [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Searching host: 'np0005486731.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.148 2 DEBUG nova.virt.libvirt.host [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.148 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.149 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T10:09:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3d2e2556-398d-47fa-b582-04a393026796',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T10:09:39Z,direct_url=,disk_format='qcow2',id=4d7273e1-0c4b-46b6-bdfa-9a43be3f063a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='41187b090f3d4818a32baa37ce8a3991',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-14T10:09:41Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.149 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.150 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.150 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.150 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.151 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.151 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Oct 14 
06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.151 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.152 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.152 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.152 2 DEBUG nova.virt.hardware [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.156 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:12:31 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader).osd e98 do_prune osdmap full prune enabled Oct 14 06:12:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e99 e99: 6 total, 6 up, 6 in Oct 14 06:12:31 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e99: 6 total, 6 up, 6 in Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.457 2 DEBUG nova.network.neutron [req-e5c86cc1-4744-466c-ae95-cf7e3c5f6626 req-971a5c38-fdd3-4c2e-89c7-4ea541405016 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Updated VIF entry in instance network info cache for port f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.458 2 DEBUG nova.network.neutron [req-e5c86cc1-4744-466c-ae95-cf7e3c5f6626 req-971a5c38-fdd3-4c2e-89c7-4ea541405016 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Updating instance_info_cache with network_info: [{"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, 
"devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.481 2 DEBUG oslo_concurrency.lockutils [req-e5c86cc1-4744-466c-ae95-cf7e3c5f6626 req-971a5c38-fdd3-4c2e-89c7-4ea541405016 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Releasing lock "refresh_cache-9d663561-9fd7-4dea-b31c-23b820127bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:12:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:12:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4118846607' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.623 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.660 2 DEBUG nova.storage.rbd_utils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] rbd image 9d663561-9fd7-4dea-b31c-23b820127bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:12:31 localhost nova_compute[295778]: 2025-10-14 10:12:31.665 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:12:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:12:32 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/538028085' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.063 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.066 2 DEBUG nova.virt.libvirt.vif [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T10:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=9,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKRJa9flztUgTwnCl6PH+7wHHPjSI4E3ULd1AG6dlMpg0WFpMu8RmKybuAiNsf1DpcVzMtzORE22LeYcNeKsaszS3kKYeZVHRdc9csLSo0YNcaV5/5KSNFNcDAXaDqSfww==',key_name='tempest-keypair-1468241715',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67facb686b1a45e4af5a7329836978ce',ramdisk_id='',reservation_id='r-hychsdrl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-359728251',owner_user_name='tempest-ServersV294TestFqdnHostnames-359728251-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T10:12:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a5c8b032521c4660a9f50471da931c3a',uuid=9d663561-9fd7-4dea-b31c-23b820127bbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.066 2 DEBUG nova.network.os_vif_util [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Converting VIF {"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, 
"active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.068 2 DEBUG nova.network.os_vif_util [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:a4,bridge_name='br-int',has_traffic_filtering=True,id=f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb,network=Network(35f103ce-4039-44a2-a9f1-269864e57b47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5a1b7e6-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.070 2 DEBUG nova.objects.instance [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d663561-9fd7-4dea-b31c-23b820127bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.087 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] End _get_guest_xml xml= Oct 14 06:12:32 localhost nova_compute[295778]: 9d663561-9fd7-4dea-b31c-23b820127bbe Oct 14 06:12:32 localhost nova_compute[295778]: instance-00000009 Oct 14 06:12:32 localhost nova_compute[295778]: 131072 Oct 14 06:12:32 localhost nova_compute[295778]: 1 Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost 
nova_compute[295778]: guest-instance-1 Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:31 Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: 128 Oct 14 06:12:32 localhost nova_compute[295778]: 1 Oct 14 06:12:32 localhost nova_compute[295778]: 0 Oct 14 06:12:32 localhost nova_compute[295778]: 0 Oct 14 06:12:32 localhost nova_compute[295778]: 1 Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: tempest-ServersV294TestFqdnHostnames-359728251-project-member Oct 14 06:12:32 localhost nova_compute[295778]: tempest-ServersV294TestFqdnHostnames-359728251 Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: RDO Oct 14 06:12:32 localhost nova_compute[295778]: OpenStack Compute Oct 14 06:12:32 localhost nova_compute[295778]: 27.5.2-0.20250829104910.6f8decf.el9 Oct 14 06:12:32 localhost nova_compute[295778]: 9d663561-9fd7-4dea-b31c-23b820127bbe Oct 14 06:12:32 localhost nova_compute[295778]: 9d663561-9fd7-4dea-b31c-23b820127bbe Oct 14 06:12:32 localhost nova_compute[295778]: Virtual Machine Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: hvm Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost 
nova_compute[295778]: /dev/urandom Oct 14 06:12:32 localhost
nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: Oct 14 06:12:32 localhost nova_compute[295778]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.087 2 DEBUG nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Preparing to wait for external event network-vif-plugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.088 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquiring lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.088 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.089 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" "released" by 
"nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.090 2 DEBUG nova.virt.libvirt.vif [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T10:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=9,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKRJa9flztUgTwnCl6PH+7wHHPjSI4E3ULd1AG6dlMpg0WFpMu8RmKybuAiNsf1DpcVzMtzORE22LeYcNeKsaszS3kKYeZVHRdc9csLSo0YNcaV5/5KSNFNcDAXaDqSfww==',key_name='tempest-keypair-1468241715',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67facb686b1a45e4af5a7329836978ce',ramdisk_id='',reservation_id='r-hychsdrl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-359728251',owner_user_name='tempest-ServersV294TestFqdnHostnames-359728251-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T10:12:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a5c8b032521c4660a9f50471da931c3a',uuid=9d663561-9fd7-4dea-b31c-23b820127bbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.090 2 DEBUG nova.network.os_vif_util [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Converting VIF {"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, 
"active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.091 2 DEBUG nova.network.os_vif_util [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:a4,bridge_name='br-int',has_traffic_filtering=True,id=f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb,network=Network(35f103ce-4039-44a2-a9f1-269864e57b47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5a1b7e6-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.091 2 DEBUG os_vif [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:a4,bridge_name='br-int',has_traffic_filtering=True,id=f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb,network=Network(35f103ce-4039-44a2-a9f1-269864e57b47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5a1b7e6-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 
14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.097 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5a1b7e6-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5a1b7e6-aa, col_values=(('external_ids', {'iface-id': 'f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:4f:a4', 'vm-uuid': '9d663561-9fd7-4dea-b31c-23b820127bbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.108 2 INFO os_vif [None 
req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:4f:a4,bridge_name='br-int',has_traffic_filtering=True,id=f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb,network=Network(35f103ce-4039-44a2-a9f1-269864e57b47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5a1b7e6-aa')#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.162 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.163 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.163 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] No VIF found with MAC fa:16:3e:4a:4f:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.164 2 INFO nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Using config drive#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.198 2 DEBUG nova.storage.rbd_utils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] rbd image 9d663561-9fd7-4dea-b31c-23b820127bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.306 2 INFO nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Creating config drive at /var/lib/nova/instances/9d663561-9fd7-4dea-b31c-23b820127bbe/disk.config#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.313 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d663561-9fd7-4dea-b31c-23b820127bbe/disk.config -ldots -allow-lowercase -allow-multidot 
-l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmgdwgr80 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.452 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d663561-9fd7-4dea-b31c-23b820127bbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmgdwgr80" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.492 2 DEBUG nova.storage.rbd_utils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] rbd image 9d663561-9fd7-4dea-b31c-23b820127bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.497 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d663561-9fd7-4dea-b31c-23b820127bbe/disk.config 9d663561-9fd7-4dea-b31c-23b820127bbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 
2025-10-14 10:12:32.729 2 DEBUG oslo_concurrency.processutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d663561-9fd7-4dea-b31c-23b820127bbe/disk.config 9d663561-9fd7-4dea-b31c-23b820127bbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.730 2 INFO nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Deleting local config drive /var/lib/nova/instances/9d663561-9fd7-4dea-b31c-23b820127bbe/disk.config because it was imported into RBD.#033[00m Oct 14 06:12:32 localhost kernel: device tapf5a1b7e6-aa entered promiscuous mode Oct 14 06:12:32 localhost NetworkManager[5972]: [1760436752.7926] manager: (tapf5a1b7e6-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/25) Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost ovn_controller[156286]: 2025-10-14T10:12:32Z|00102|binding|INFO|Claiming lport f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb for this chassis. 
Oct 14 06:12:32 localhost ovn_controller[156286]: 2025-10-14T10:12:32Z|00103|binding|INFO|f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb: Claiming fa:16:3e:4a:4f:a4 10.100.0.6 Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost systemd-udevd[325895]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost NetworkManager[5972]: [1760436752.8184] device (tapf5a1b7e6-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 14 06:12:32 localhost ovn_controller[156286]: 2025-10-14T10:12:32Z|00104|ovn_bfd|INFO|Enabled BFD on interface ovn-31b4da-0 Oct 14 06:12:32 localhost ovn_controller[156286]: 2025-10-14T10:12:32Z|00105|ovn_bfd|INFO|Enabled BFD on interface ovn-953af5-0 Oct 14 06:12:32 localhost ovn_controller[156286]: 2025-10-14T10:12:32Z|00106|ovn_bfd|INFO|Enabled BFD on interface ovn-4e3575-0 Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.818 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:4f:a4 10.100.0.6'], port_security=['fa:16:3e:4a:4f:a4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-35f103ce-4039-44a2-a9f1-269864e57b47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67facb686b1a45e4af5a7329836978ce', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2c1552c-9248-46c1-8391-9c390debaa3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68dfed75-146b-4653-b2d8-e1bc5ca7cd98, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.820 161932 INFO neutron.agent.ovn.metadata.agent [-] Port f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb in datapath 35f103ce-4039-44a2-a9f1-269864e57b47 bound to our chassis#033[00m Oct 14 06:12:32 localhost NetworkManager[5972]: [1760436752.8218] device (tapf5a1b7e6-aa): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.824 161932 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35f103ce-4039-44a2-a9f1-269864e57b47#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost systemd-machined[205044]: New machine qemu-5-instance-00000009. 
Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.837 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[536884b9-1962-4f5a-8213-fc4484c4ac49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.838 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35f103ce-41 in ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.841 320313 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35f103ce-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.841 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[f6072cea-d87a-4183-8264-003b26166bcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.843 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[32edd6ce-415b-44ae-ba78-3ec1b2f97ea7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:32 localhost systemd[1]: Started Virtual Machine qemu-5-instance-00000009. 
Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.854 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[c08c8a65-8612-493e-af85-a965b98813c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.868 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[06d16866-5f39-415d-976a-f447705bc129]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:32 localhost ovn_controller[156286]: 2025-10-14T10:12:32Z|00107|binding|INFO|Setting lport f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb ovn-installed in OVS Oct 14 06:12:32 localhost ovn_controller[156286]: 2025-10-14T10:12:32Z|00108|binding|INFO|Setting lport f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb up in Southbound Oct 14 06:12:32 localhost nova_compute[295778]: 2025-10-14 10:12:32.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.900 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[39073362-cdf4-4698-bdb5-7591a8780086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:32 localhost NetworkManager[5972]: [1760436752.9092] manager: (tap35f103ce-40): new Veth device (/org/freedesktop/NetworkManager/Devices/26) Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.910 
320313 DEBUG oslo.privsep.daemon [-] privsep: reply[26be3e27-d935-42c7-825c-5ee39df92674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.941 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[1305f24c-f059-4b02-9453-84db1ee151e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.945 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[dc468442-1dc8-47f9-9b8e-cfd15bc8a14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:32 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap35f103ce-41: link becomes ready Oct 14 06:12:32 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap35f103ce-40: link becomes ready Oct 14 06:12:32 localhost NetworkManager[5972]: [1760436752.9739] device (tap35f103ce-40): carrier: link connected Oct 14 06:12:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:32.979 322681 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b79713-202f-4507-a491-08772ae58102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.000 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[c50575ba-5857-4830-b9f9-498351c94a7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35f103ce-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 
'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:c9:2d:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 
'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1284007, 'reachable_time': 18954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325947, 'error': None, 'target': 'ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.019 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[fbbbd847-e3ce-4918-900e-bfafa61554dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:2d3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1284007, 'tstamp': 1284007}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325951, 'error': None, 'target': 'ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.030 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[47fab6e4-31a4-4fe1-b279-42d22d855697]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35f103ce-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:c9:2d:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 
'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1284007, 'reachable_time': 18954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 
1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325952, 'error': None, 'target': 'ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.061 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[a5aebbf4-bb4e-4ced-b47d-1b199746d024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.090 2 DEBUG nova.compute.manager 
[req-3d4917d1-4ac1-440d-8132-38f144b108e1 req-bedc1e15-7ee1-4b1e-9436-be0e93e4a762 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received event network-vif-plugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.090 2 DEBUG oslo_concurrency.lockutils [req-3d4917d1-4ac1-440d-8132-38f144b108e1 req-bedc1e15-7ee1-4b1e-9436-be0e93e4a762 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.091 2 DEBUG oslo_concurrency.lockutils [req-3d4917d1-4ac1-440d-8132-38f144b108e1 req-bedc1e15-7ee1-4b1e-9436-be0e93e4a762 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.091 2 DEBUG oslo_concurrency.lockutils [req-3d4917d1-4ac1-440d-8132-38f144b108e1 req-bedc1e15-7ee1-4b1e-9436-be0e93e4a762 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.092 2 DEBUG nova.compute.manager 
[req-3d4917d1-4ac1-440d-8132-38f144b108e1 req-bedc1e15-7ee1-4b1e-9436-be0e93e4a762 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Processing event network-vif-plugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.119 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[78820dd6-d23c-45c6-aac4-269cfae29ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.121 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35f103ce-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.121 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.122 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35f103ce-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:12:33 localhost kernel: device tap35f103ce-40 entered promiscuous mode Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.128 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35f103ce-40, col_values=(('external_ids', {'iface-id': '42f114a4-f4db-4901-9f3a-f5496e6a4392'}),)) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:33 localhost ovn_controller[156286]: 2025-10-14T10:12:33Z|00109|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0) Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.145 161932 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35f103ce-4039-44a2-a9f1-269864e57b47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35f103ce-4039-44a2-a9f1-269864e57b47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.146 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[450d6364-4342-400c-9a74-eb2d9960b0d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.147 161932 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: global Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: log /dev/log local0 debug Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: log-tag haproxy-metadata-proxy-35f103ce-4039-44a2-a9f1-269864e57b47 Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: user root Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: group root Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: maxconn 1024 Oct 14 06:12:33 localhost 
ovn_metadata_agent[161927]: pidfile /var/lib/neutron/external/pids/35f103ce-4039-44a2-a9f1-269864e57b47.pid.haproxy Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: daemon Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: defaults Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: log global Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: mode http Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: option httplog Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: option dontlognull Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: option http-server-close Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: option forwardfor Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: retries 3 Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: timeout http-request 30s Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: timeout connect 30s Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: timeout client 32s Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: timeout server 32s Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: timeout http-keep-alive 30s Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: listen listener Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: bind 169.254.169.254:80 Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: server metadata /var/lib/neutron/metadata_proxy Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: http-request add-header X-OVN-Network-ID 35f103ce-4039-44a2-a9f1-269864e57b47 Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 14 06:12:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 163 MiB data, 736 MiB used, 41 GiB / 42 GiB avail; 648 KiB/s rd, 342 KiB/s 
wr, 71 op/s Oct 14 06:12:33 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:33.148 161932 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47', 'env', 'PROCESS_TAG=haproxy-35f103ce-4039-44a2-a9f1-269864e57b47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35f103ce-4039-44a2-a9f1-269864e57b47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 14 06:12:33 localhost openstack_network_exporter[248748]: ERROR 10:12:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:12:33 localhost openstack_network_exporter[248748]: ERROR 10:12:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:12:33 localhost openstack_network_exporter[248748]: ERROR 10:12:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:12:33 localhost openstack_network_exporter[248748]: ERROR 10:12:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:12:33 localhost openstack_network_exporter[248748]: Oct 14 06:12:33 localhost openstack_network_exporter[248748]: ERROR 10:12:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:12:33 localhost openstack_network_exporter[248748]: Oct 14 06:12:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e99 do_prune osdmap full prune enabled Oct 14 06:12:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e100 e100: 6 total, 6 up, 6 in Oct 14 06:12:33 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e100: 6 total, 6 up, 6 in Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.613 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting 
event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.614 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] VM Started (Lifecycle Event)#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.617 2 DEBUG nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.621 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.629 2 INFO nova.virt.libvirt.driver [-] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Instance spawned successfully.#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.629 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 
2025-10-14 10:12:33.633 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.636 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:12:33 localhost podman[326009]: Oct 14 06:12:33 localhost podman[326009]: 2025-10-14 10:12:33.653424101 +0000 UTC m=+0.091089612 container create 2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009) Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.656 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.657 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.657 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] VM Paused (Lifecycle Event)#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.665 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.666 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.667 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.668 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 
a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.668 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.669 2 DEBUG nova.virt.libvirt.driver [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.684 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.687 2 DEBUG nova.virt.driver [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.688 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] VM Resumed (Lifecycle Event)#033[00m Oct 14 
06:12:33 localhost systemd[1]: Started libpod-conmon-2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4.scope. Oct 14 06:12:33 localhost podman[326009]: 2025-10-14 10:12:33.610039863 +0000 UTC m=+0.047705354 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 14 06:12:33 localhost systemd[1]: Started libcrun container. Oct 14 06:12:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8877213d766d2b9ee6b48536185ba6f5d1bc06a65668aca90f6247bfee032a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.723 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.728 2 DEBUG nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 14 06:12:33 localhost podman[326009]: 2025-10-14 10:12:33.735798261 +0000 UTC m=+0.173463862 container init 2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:12:33 localhost podman[326009]: 2025-10-14 10:12:33.745156839 +0000 UTC m=+0.182822340 container start 2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.759 2 INFO nova.compute.manager [None req-727d3e03-5808-4df4-889a-251c511937f2 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.776 2 INFO nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Took 4.18 seconds to spawn the instance on the hypervisor.#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.777 2 DEBUG nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:12:33 localhost neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47[326023]: [NOTICE] (326027) : New worker (326029) forked Oct 14 06:12:33 localhost neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47[326023]: [NOTICE] (326027) : Loading success. 
Oct 14 06:12:33 localhost ovn_controller[156286]: 2025-10-14T10:12:33Z|00110|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0) Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.879 2 INFO nova.compute.manager [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Took 5.56 seconds to build instance.#033[00m Oct 14 06:12:33 localhost nova_compute[295778]: 2025-10-14 10:12:33.897 2 DEBUG oslo_concurrency.lockutils [None req-71942470-9079-4931-bb18-878b256d4354 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 5.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:12:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e100 do_prune osdmap full prune enabled Oct 14 06:12:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e101 e101: 6 total, 6 up, 6 in Oct 14 06:12:34 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e101: 6 total, 6 up, 6 in Oct 14 06:12:34 localhost ovn_controller[156286]: 2025-10-14T10:12:34Z|00111|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0) Oct 14 06:12:34 localhost nova_compute[295778]: 2025-10-14 10:12:34.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:34 localhost ovn_controller[156286]: 
2025-10-14T10:12:34Z|00112|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0) Oct 14 06:12:34 localhost nova_compute[295778]: 2025-10-14 10:12:34.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:34 localhost ovn_controller[156286]: 2025-10-14T10:12:34Z|00113|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0) Oct 14 06:12:34 localhost nova_compute[295778]: 2025-10-14 10:12:34.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 192 MiB data, 805 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 209 op/s Oct 14 06:12:35 localhost nova_compute[295778]: 2025-10-14 10:12:35.159 2 DEBUG nova.compute.manager [req-f043203f-f945-424a-b002-680c8f6c28ec req-c962d13d-e3fc-49e8-80b3-f4be58935f6b da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received event network-vif-plugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:12:35 localhost nova_compute[295778]: 2025-10-14 10:12:35.159 2 DEBUG oslo_concurrency.lockutils [req-f043203f-f945-424a-b002-680c8f6c28ec req-c962d13d-e3fc-49e8-80b3-f4be58935f6b da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:12:35 localhost nova_compute[295778]: 2025-10-14 10:12:35.159 2 DEBUG oslo_concurrency.lockutils [req-f043203f-f945-424a-b002-680c8f6c28ec req-c962d13d-e3fc-49e8-80b3-f4be58935f6b da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:12:35 localhost nova_compute[295778]: 2025-10-14 10:12:35.159 2 DEBUG oslo_concurrency.lockutils [req-f043203f-f945-424a-b002-680c8f6c28ec req-c962d13d-e3fc-49e8-80b3-f4be58935f6b da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:12:35 localhost nova_compute[295778]: 2025-10-14 10:12:35.160 2 DEBUG nova.compute.manager [req-f043203f-f945-424a-b002-680c8f6c28ec req-c962d13d-e3fc-49e8-80b3-f4be58935f6b da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] No waiting events found dispatching network-vif-plugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:12:35 localhost nova_compute[295778]: 2025-10-14 10:12:35.160 2 WARNING nova.compute.manager [req-f043203f-f945-424a-b002-680c8f6c28ec req-c962d13d-e3fc-49e8-80b3-f4be58935f6b da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received 
unexpected event network-vif-plugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb for instance with vm_state active and task_state None.#033[00m Oct 14 06:12:35 localhost nova_compute[295778]: 2025-10-14 10:12:35.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:35 localhost nova_compute[295778]: 2025-10-14 10:12:35.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:35 localhost ovn_controller[156286]: 2025-10-14T10:12:35Z|00114|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0) Oct 14 06:12:35 localhost nova_compute[295778]: 2025-10-14 10:12:35.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 192 MiB data, 805 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 3.1 MiB/s wr, 189 op/s Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.224 2 DEBUG nova.compute.manager [req-fa33fe59-eeaf-4d6c-bf21-d1fcd0e29860 req-b66183c0-9b7b-4d10-914a-d9a332149d15 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received event network-changed-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.225 2 DEBUG nova.compute.manager [req-fa33fe59-eeaf-4d6c-bf21-d1fcd0e29860 
req-b66183c0-9b7b-4d10-914a-d9a332149d15 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Refreshing instance network info cache due to event network-changed-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.225 2 DEBUG oslo_concurrency.lockutils [req-fa33fe59-eeaf-4d6c-bf21-d1fcd0e29860 req-b66183c0-9b7b-4d10-914a-d9a332149d15 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "refresh_cache-9d663561-9fd7-4dea-b31c-23b820127bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.225 2 DEBUG oslo_concurrency.lockutils [req-fa33fe59-eeaf-4d6c-bf21-d1fcd0e29860 req-b66183c0-9b7b-4d10-914a-d9a332149d15 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquired lock "refresh_cache-9d663561-9fd7-4dea-b31c-23b820127bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.226 2 DEBUG nova.network.neutron [req-fa33fe59-eeaf-4d6c-bf21-d1fcd0e29860 req-b66183c0-9b7b-4d10-914a-d9a332149d15 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Refreshing network info cache for port f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.696 2 DEBUG nova.network.neutron [req-fa33fe59-eeaf-4d6c-bf21-d1fcd0e29860 req-b66183c0-9b7b-4d10-914a-d9a332149d15 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 
9d663561-9fd7-4dea-b31c-23b820127bbe] Updated VIF entry in instance network info cache for port f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.699 2 DEBUG nova.network.neutron [req-fa33fe59-eeaf-4d6c-bf21-d1fcd0e29860 req-b66183c0-9b7b-4d10-914a-d9a332149d15 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Updating instance_info_cache with network_info: [{"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:37 localhost nova_compute[295778]: 2025-10-14 10:12:37.717 2 DEBUG oslo_concurrency.lockutils [req-fa33fe59-eeaf-4d6c-bf21-d1fcd0e29860 req-b66183c0-9b7b-4d10-914a-d9a332149d15 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Releasing lock "refresh_cache-9d663561-9fd7-4dea-b31c-23b820127bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 14 06:12:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e101 do_prune osdmap full prune enabled Oct 14 06:12:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e102 e102: 6 total, 6 up, 6 in Oct 14 06:12:38 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e102: 6 total, 6 up, 6 in Oct 14 06:12:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 192 MiB data, 805 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 3.1 MiB/s wr, 189 op/s Oct 14 06:12:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:12:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:12:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:12:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:12:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:12:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:12:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e102 do_prune osdmap full prune enabled Oct 14 06:12:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e103 e103: 6 total, 6 up, 6 in Oct 14 06:12:39 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e103: 6 total, 6 up, 6 in Oct 14 06:12:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e103 do_prune osdmap full prune enabled Oct 14 06:12:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e104 e104: 6 total, 6 up, 6 in Oct 14 06:12:39 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e104: 6 total, 6 up, 6 in Oct 14 06:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:12:40 localhost systemd[1]: tmp-crun.rC5ovM.mount: Deactivated successfully. 
Oct 14 06:12:40 localhost podman[326039]: 2025-10-14 10:12:40.573054598 +0000 UTC m=+0.108042752 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 06:12:40 localhost podman[326039]: 2025-10-14 10:12:40.616234041 +0000 UTC m=+0.151222235 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:12:40 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:12:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 7.7 KiB/s wr, 201 op/s Oct 14 06:12:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e104 do_prune osdmap full prune enabled Oct 14 06:12:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:12:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 8765 writes, 36K keys, 8765 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 8765 writes, 2034 syncs, 4.31 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3017 writes, 10K keys, 3017 commit groups, 1.0 writes per commit group, ingest: 11.15 MB, 0.02 MB/s#012Interval WAL: 3017 writes, 1283 syncs, 2.35 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 06:12:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e105 e105: 6 total, 6 up, 6 in Oct 14 06:12:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e105: 6 total, 6 up, 6 in Oct 14 06:12:42 localhost nova_compute[295778]: 2025-10-14 10:12:42.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e105 do_prune osdmap full prune enabled Oct 14 06:12:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e106 e106: 6 total, 6 up, 6 in Oct 14 06:12:42 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e106: 6 total, 6 up, 6 in Oct 14 06:12:42 localhost nova_compute[295778]: 2025-10-14 10:12:42.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 11 KiB/s wr, 302 op/s Oct 14 06:12:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e106 do_prune osdmap full prune enabled Oct 14 06:12:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e107 e107: 6 total, 6 up, 6 in Oct 14 06:12:44 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e107: 6 total, 6 up, 6 in Oct 14 06:12:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 13 KiB/s wr, 307 op/s Oct 14 06:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:12:45 localhost podman[326059]: 2025-10-14 10:12:45.62423975 +0000 UTC m=+0.120380978 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:12:45 localhost podman[326059]: 2025-10-14 10:12:45.660712645 +0000 UTC m=+0.156853863 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:12:45 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:12:45 localhost podman[326058]: 2025-10-14 10:12:45.712840496 +0000 UTC m=+0.211945843 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:12:45 localhost podman[326058]: 2025-10-14 10:12:45.724066813 +0000 UTC m=+0.223172090 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009) Oct 14 06:12:45 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:12:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:12:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 7165 writes, 30K keys, 7165 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 7165 writes, 1639 syncs, 4.37 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2190 writes, 8530 keys, 2190 commit groups, 1.0 writes per commit group, ingest: 10.09 MB, 0.02 MB/s#012Interval WAL: 2190 writes, 923 syncs, 2.37 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 06:12:47 localhost nova_compute[295778]: 2025-10-14 10:12:47.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 4.7 KiB/s wr, 84 op/s Oct 14 06:12:47 localhost nova_compute[295778]: 2025-10-14 10:12:47.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:47 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:47.768 270389 INFO neutron.agent.linux.ip_lib [None req-b406138b-49ec-4ec9-a80e-daec09c4007b - - - - - -] Device tapc7fd7e94-cd cannot be used as it has no MAC address#033[00m Oct 14 06:12:47 localhost nova_compute[295778]: 2025-10-14 10:12:47.828 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:47 localhost kernel: device tapc7fd7e94-cd entered promiscuous mode Oct 14 06:12:47 localhost nova_compute[295778]: 2025-10-14 10:12:47.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:47 localhost NetworkManager[5972]: [1760436767.8416] manager: (tapc7fd7e94-cd): new Generic device (/org/freedesktop/NetworkManager/Devices/27) Oct 14 06:12:47 localhost ovn_controller[156286]: 2025-10-14T10:12:47Z|00115|binding|INFO|Claiming lport c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c for this chassis. Oct 14 06:12:47 localhost ovn_controller[156286]: 2025-10-14T10:12:47Z|00116|binding|INFO|c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c: Claiming unknown Oct 14 06:12:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:47.854 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-77037dfd-d1e0-4c52-b2d1-08dfead9ed93', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77037dfd-d1e0-4c52-b2d1-08dfead9ed93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c2d838c66c4141a3a77483b40ab737', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], 
encap=[], mirror_rules=[], datapath=46e6e16b-a46e-4716-84bd-5e74736016b1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:12:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:47.856 161932 INFO neutron.agent.ovn.metadata.agent [-] Port c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c in datapath 77037dfd-d1e0-4c52-b2d1-08dfead9ed93 bound to our chassis#033[00m Oct 14 06:12:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:47.859 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9f580285-f949-4fbc-883b-b426949f268b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:12:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:47.859 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77037dfd-d1e0-4c52-b2d1-08dfead9ed93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:12:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:47.860 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[33d4e643-1c3b-444f-9779-2c40ca1183e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:47 localhost nova_compute[295778]: 2025-10-14 10:12:47.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:47 localhost ovn_controller[156286]: 2025-10-14T10:12:47Z|00117|binding|INFO|Setting lport c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c ovn-installed in OVS Oct 14 06:12:47 localhost ovn_controller[156286]: 2025-10-14T10:12:47Z|00118|binding|INFO|Setting lport 
c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c up in Southbound Oct 14 06:12:47 localhost nova_compute[295778]: 2025-10-14 10:12:47.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:47 localhost nova_compute[295778]: 2025-10-14 10:12:47.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:47 localhost ovn_controller[156286]: 2025-10-14T10:12:47Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:4f:a4 10.100.0.6 Oct 14 06:12:47 localhost ovn_controller[156286]: 2025-10-14T10:12:47Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:4f:a4 10.100.0.6 Oct 14 06:12:47 localhost nova_compute[295778]: 2025-10-14 10:12:47.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:48 localhost nova_compute[295778]: 2025-10-14 10:12:48.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:12:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2770095470' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:12:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:12:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2770095470' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:12:48 localhost podman[326169]: Oct 14 06:12:48 localhost podman[326169]: 2025-10-14 10:12:48.781011467 +0000 UTC m=+0.084049126 container create 2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:12:48 localhost systemd[1]: Started libpod-conmon-2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81.scope. Oct 14 06:12:48 localhost systemd[1]: Started libcrun container. 
Oct 14 06:12:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b457199e75ad73cdd78b1415b8f1bf16327e6560faf24c0237ceae852dffcbb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:12:48 localhost podman[326169]: 2025-10-14 10:12:48.742585619 +0000 UTC m=+0.045623308 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:12:48 localhost podman[326169]: 2025-10-14 10:12:48.849161291 +0000 UTC m=+0.152198960 container init 2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:12:48 localhost podman[326169]: 2025-10-14 10:12:48.862153085 +0000 UTC m=+0.165190754 container start 2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:12:48 localhost dnsmasq[326187]: started, version 2.85 cachesize 150 Oct 14 06:12:48 localhost dnsmasq[326187]: DNS service limited to local subnets Oct 14 06:12:48 localhost dnsmasq[326187]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:12:48 localhost dnsmasq[326187]: warning: no upstream servers configured Oct 14 06:12:48 localhost dnsmasq-dhcp[326187]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:12:48 localhost dnsmasq[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/addn_hosts - 0 addresses Oct 14 06:12:48 localhost dnsmasq-dhcp[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/host Oct 14 06:12:48 localhost dnsmasq-dhcp[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/opts Oct 14 06:12:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:49.010 270389 INFO neutron.agent.dhcp.agent [None req-7bcbbfb1-4880-427a-a9ca-7bad6061a450 - - - - - -] DHCP configuration for ports {'2f1614e6-16df-46d3-a029-a844f7c77850'} is completed#033[00m Oct 14 06:12:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 3.7 KiB/s wr, 66 op/s Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. 
Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.249194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436769249274, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 719, "num_deletes": 255, "total_data_size": 567326, "memory_usage": 580568, "flush_reason": "Manual Compaction"} Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436769254662, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 556409, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25837, "largest_seqno": 26555, "table_properties": {"data_size": 552776, "index_size": 1424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9140, "raw_average_key_size": 20, "raw_value_size": 545236, "raw_average_value_size": 1244, "num_data_blocks": 62, "num_entries": 438, "num_filter_entries": 438, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436739, "oldest_key_time": 1760436739, "file_creation_time": 1760436769, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5522 microseconds, and 2584 cpu microseconds. Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.254708) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 556409 bytes OK Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.254763) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.257904) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.257927) EVENT_LOG_v1 {"time_micros": 1760436769257920, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.257948) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 563526, prev total WAL file size 
563526, number of live WAL files 2. Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.258544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end) Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(543KB)], [45(18MB)] Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436769258604, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 20277527, "oldest_snapshot_seqno": -1} Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 12393 keys, 16999308 bytes, temperature: kUnknown Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436769369556, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 16999308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16928391, "index_size": 38816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31045, "raw_key_size": 335556, "raw_average_key_size": 27, "raw_value_size": 16716819, 
"raw_average_value_size": 1348, "num_data_blocks": 1456, "num_entries": 12393, "num_filter_entries": 12393, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436769, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.369961) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 16999308 bytes Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.373643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.6 rd, 153.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 18.8 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(67.0) write-amplify(30.6) OK, records in: 12921, records dropped: 528 output_compression: NoCompression Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.373674) EVENT_LOG_v1 {"time_micros": 1760436769373661, "job": 26, "event": "compaction_finished", "compaction_time_micros": 111031, "compaction_time_cpu_micros": 49835, "output_level": 6, "num_output_files": 1, "total_output_size": 16999308, "num_input_records": 12921, "num_output_records": 12393, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436769373957, "job": 26, "event": "table_file_deletion", "file_number": 47} Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436769377796, 
"job": 26, "event": "table_file_deletion", "file_number": 45} Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.258465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.377943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.377956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.377960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.377964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:49 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:12:49.377973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:12:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e107 do_prune osdmap full prune enabled Oct 14 06:12:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e108 e108: 6 total, 6 up, 6 in Oct 14 06:12:49 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e108: 6 total, 6 up, 6 in Oct 14 06:12:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:49.979 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2025-10-14T10:12:49Z, description=, device_id=cd458f74-59aa-4484-a529-3365c9369c99, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=780cdc16-6535-45a4-83b6-c7aed06313ef, ip_allocation=immediate, mac_address=fa:16:3e:69:0c:cd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:12:45Z, description=, dns_domain=, id=77037dfd-d1e0-4c52-b2d1-08dfead9ed93, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1652557293-network, port_security_enabled=True, project_id=13c2d838c66c4141a3a77483b40ab737, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29905, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=835, status=ACTIVE, subnets=['fd45d254-8ba5-4ade-9397-3b27e598df2c'], tags=[], tenant_id=13c2d838c66c4141a3a77483b40ab737, updated_at=2025-10-14T10:12:46Z, vlan_transparent=None, network_id=77037dfd-d1e0-4c52-b2d1-08dfead9ed93, port_security_enabled=False, project_id=13c2d838c66c4141a3a77483b40ab737, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=865, status=DOWN, tags=[], tenant_id=13c2d838c66c4141a3a77483b40ab737, updated_at=2025-10-14T10:12:49Z on network 77037dfd-d1e0-4c52-b2d1-08dfead9ed93#033[00m Oct 14 06:12:50 localhost dnsmasq[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/addn_hosts - 1 addresses Oct 14 06:12:50 localhost dnsmasq-dhcp[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/host Oct 14 06:12:50 localhost dnsmasq-dhcp[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/opts Oct 14 06:12:50 localhost podman[326205]: 2025-10-14 10:12:50.211464739 +0000 UTC m=+0.065384732 container kill 
2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.406 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}267cef43f4b200374895b3b7c8950c1e96ced747f196a4ef5630af3d172afaad" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Oct 14 06:12:50 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:50.450 270389 INFO neutron.agent.dhcp.agent [None req-4afc8b53-3e5d-4ebe-b453-42a9bd309a27 - - - - - -] DHCP configuration for ports {'780cdc16-6535-45a4-83b6-c7aed06313ef'} is completed#033[00m Oct 14 06:12:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e108 do_prune osdmap full prune enabled Oct 14 06:12:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e109 e109: 6 total, 6 up, 6 in Oct 14 06:12:50 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e109: 6 total, 6 up, 6 in Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.660 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Tue, 14 Oct 2025 10:12:50 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: 
OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-cc685028-9b32-4409-84e3-b1ca2ae32633 x-openstack-request-id: req-cc685028-9b32-4409-84e3-b1ca2ae32633 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.660 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "36e4c2a8-ca99-4c45-8719-dd5129265531", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/36e4c2a8-ca99-4c45-8719-dd5129265531"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/36e4c2a8-ca99-4c45-8719-dd5129265531"}]}, {"id": "3d2e2556-398d-47fa-b582-04a393026796", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/3d2e2556-398d-47fa-b582-04a393026796"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/3d2e2556-398d-47fa-b582-04a393026796"}]}, {"id": "e48e589c-cd63-4323-98a2-3d559dc2261b", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/e48e589c-cd63-4323-98a2-3d559dc2261b"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/e48e589c-cd63-4323-98a2-3d559dc2261b"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.661 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-cc685028-9b32-4409-84e3-b1ca2ae32633 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.664 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET 
http://nova-internal.openstack.svc:8774/v2.1/flavors/3d2e2556-398d-47fa-b582-04a393026796 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}267cef43f4b200374895b3b7c8950c1e96ced747f196a4ef5630af3d172afaad" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.694 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Tue, 14 Oct 2025 10:12:50 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a27d3be3-b5c4-4682-94e5-f9b644cc4081 x-openstack-request-id: req-a27d3be3-b5c4-4682-94e5-f9b644cc4081 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.695 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "3d2e2556-398d-47fa-b582-04a393026796", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/3d2e2556-398d-47fa-b582-04a393026796"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/3d2e2556-398d-47fa-b582-04a393026796"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.695 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/3d2e2556-398d-47fa-b582-04a393026796 used request id req-a27d3be3-b5c4-4682-94e5-f9b644cc4081 request 
/usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.696 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'name': 'guest-instance-1', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'np0005486731.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '67facb686b1a45e4af5a7329836978ce', 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'hostId': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.697 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.712 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.713 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9cd25fca-73b3-4f48-8a68-19297b6e14c9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-vda', 'timestamp': '2025-10-14T10:12:50.697964', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '572f192a-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.874805913, 'message_signature': '8dfb0cb5ed41b445dc4d8058a03a441382bf2bbce84ed6eeed56e202b5a99e74'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-sda', 'timestamp': '2025-10-14T10:12:50.697964', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': 
'9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '572f372a-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.874805913, 'message_signature': '27594540d71067579ac2f9726024711d24a7674e0aad672d19bebf30550184ff'}]}, 'timestamp': '2025-10-14 10:12:50.714445', '_unique_id': '8b95abcb2fd348649751688ab8e2f02d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 
06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, 
in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.726 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.731 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.736 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 9d663561-9fd7-4dea-b31c-23b820127bbe / tapf5a1b7e6-aa inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.737 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b418cd61-f0c3-434f-b490-0fb695d66fc1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.731706', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '5732d150-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 'message_signature': 'fd28da68686c57d67714e9d9d31af7481623642269981f04840f7c20c0051896'}]}, 'timestamp': '2025-10-14 10:12:50.738121', '_unique_id': '42691c69db9a4bc887eabf3a9be39e7b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 
localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging conn = 
self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: 
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 
10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 
06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.739 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.740 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.741 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.741 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.742 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.761 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.read.bytes volume: 31808000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.762 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '28aa1e6a-cc32-4778-94a3-ecfbc09ecd65', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 31808000, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-vda', 'timestamp': '2025-10-14T10:12:50.742261', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '5736832c-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '33d4b262743244497ae709378ec2b45e512945e915f15cb7d27da3437d621761'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 299326, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-sda', 'timestamp': '2025-10-14T10:12:50.742261', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': 
'9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '573697d6-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '2e542484bd02ceb6db466145f09a78d2655d368ab5a71e258f99fa1c61ba00af'}]}, 'timestamp': '2025-10-14 10:12:50.762806', '_unique_id': '0dcc63f7774a4f61ada99d259b35c1e2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 
06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, 
in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.763 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.765 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.765 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f010a8ca-2469-414a-a13b-dd5d22bc1971', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 12, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.765627', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '57371daa-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 'message_signature': '92e4a73b59a4eef9c7f86adf71784fe5ae2bf52d1e363a6e5bcf31285375bd81'}]}, 'timestamp': '2025-10-14 10:12:50.766215', '_unique_id': 'd3d7392d9649443ab1b15de7e9533414'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging yield
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.767 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost
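The paired tracebacks above, joined by "The above exception was the direct cause of the following exception", come from Python's exception chaining: the low-level `ConnectionRefusedError` raised by `self.sock.connect(sa)` is re-raised by kombu's `_reraise_as_library_errors` as `kombu.exceptions.OperationalError` using `raise ... from exc`. A minimal, self-contained sketch of that pattern (the `OperationalError` class and `connect_or_wrap` helper here are illustrative stand-ins, not kombu's real API):

```python
import socket

class OperationalError(Exception):
    """Stand-in for kombu.exceptions.OperationalError (illustration only)."""

def connect_or_wrap(host, port, timeout=1.0):
    """Attempt a plain TCP connect; re-raise any socket-level failure as a
    library-level error with `raise ... from exc`, preserving the cause chain
    exactly as kombu's _reraise_as_library_errors does."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
    except OSError as exc:
        # The original ConnectionRefusedError becomes __cause__ of the
        # new error, which is why the log shows two chained tracebacks.
        raise OperationalError(str(exc)) from exc

# Port 1 on loopback is almost certainly closed, so this refuses quickly.
try:
    connect_or_wrap("127.0.0.1", 1)
except OperationalError as err:
    inner = err.__cause__  # the underlying ConnectionRefusedError (errno 111)
```

When printed, such a chained exception produces exactly the two-traceback layout seen in the log records above.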
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.768 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.769 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2be4e426-69f2-4117-9049-ef6905c8825d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.768948', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '57379e7e-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 'message_signature': 'ce1a15fd88198b330770af1044731122afd31984c35c3111889092ae4dcdced4'}]}, 'timestamp': '2025-10-14 10:12:50.769513', '_unique_id': '3e3af41303904c19ba3e62e39f0ba7cb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging yield
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.770 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.772 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.772 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'bbedd6af-f822-4c03-a9d2-fa84c78e2a81', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.772480', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '57382934-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 'message_signature': 'fb3c676195110ab6b7570f75960b10543bd07c9968138968e2aec26e4dce50c9'}]}, 'timestamp': '2025-10-14 10:12:50.773068', '_unique_id': '8b7ee1f036ec48c3805a49fbc232deb5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging yield
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.773 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.775 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.775 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.write.requests volume: 311 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.776 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6d95bda0-2491-4095-aba4-a1fad529af6c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 311, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-vda', 'timestamp': '2025-10-14T10:12:50.775715', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': 
{'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '5738a7c4-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '8dfb36eb0c287cbbf7d24edeca66170d65f49e3fdef8c44d195548782bb234b1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-sda', 'timestamp': '2025-10-14T10:12:50.775715', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '5738bade-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '0724cae7bb49dca694b509dfb65c062ecd5ba4d2af2fd209cff275b36b489610'}]}, 'timestamp': '2025-10-14 10:12:50.776799', '_unique_id': '681d3acfa13540e484f412546a3f5c78'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 
10:12:50.777 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 
2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.777 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.779 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.779 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.write.latency volume: 18188538961 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.780 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a5a9310f-b790-437f-b0c4-8e8bab618ed7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18188538961, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-vda', 'timestamp': '2025-10-14T10:12:50.779593', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '57393f40-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '8d0262b49a47c3edab17a995b2c2df7d9a3f6f3be16da9f5059d2e54325e4d7d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-sda', 'timestamp': '2025-10-14T10:12:50.779593', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': 
'9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '5739526e-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '21968f8c533d5587f24e27ec64710eca7f563738593e5bcd42e19a2bab233056'}]}, 'timestamp': '2025-10-14 10:12:50.780642', '_unique_id': 'e6fcea4ba5394d8f99638b02746d8b4e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 
06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, 
in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.781 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.783 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.783 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.read.requests volume: 1163 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.784 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1aeface5-e54f-4c0a-bb29-a5d9dcda3617', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1163, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-vda', 'timestamp': '2025-10-14T10:12:50.783438', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '5739d572-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '34bf246e666a16c60e7bfb52409dd6ae9c4bdf5aee2205094c737ad3b56dd471'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 120, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 
'9d663561-9fd7-4dea-b31c-23b820127bbe-sda', 'timestamp': '2025-10-14T10:12:50.783438', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '5739e8f0-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '3108062b5fefc08d3fc3930bed585c64cb43052c940af76c6237f967b2c2438b'}]}, 'timestamp': '2025-10-14 10:12:50.784491', '_unique_id': '65742303483c4fbc8328cac8c787220c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 
10:12:50.785 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.785 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.787 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.787 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.787 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on 
source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.787 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.806 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/memory.usage volume: 40.4375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1fb13a80-4343-4d3f-ac58-d1c026857391', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 40.4375, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'timestamp': '2025-10-14T10:12:50.788066', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 
'root_gb': 1}, 'message_id': '573d48c4-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.982234538, 'message_signature': 'e1972ada98eaece120d3f631be5e00297e911c31b1ab66c75c86b122e453f050'}]}, 'timestamp': '2025-10-14 10:12:50.806619', '_unique_id': '3d1e26c92b944d4685e36b205e96dd6a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR 
oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 
10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.807 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.809 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.809 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.read.latency volume: 1178872378 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.810 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.read.latency volume: 109243273 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ebbb4380-4e16-48d8-9e2d-6f0a33a0fa82', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1178872378, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-vda', 'timestamp': '2025-10-14T10:12:50.809457', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '573dd488-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': 'b621d3dfb69d8ebc9dbe8bc054e9c3c79eecbb818af3dd292a582c04c47d9796'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 109243273, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-sda', 'timestamp': '2025-10-14T10:12:50.809457', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': 
'9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '573deafe-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': 'e5a1cedbe24fb2918af0e55cfa60b81a33135a81eb391e8e45539d671270bdb0'}]}, 'timestamp': '2025-10-14 10:12:50.810810', '_unique_id': '0c6306df860c4e4b9902f540c511ac7e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 
06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, 
in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.812 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.813 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.813 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.814 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5d0583a5-b79c-4a6f-8a1b-2c6f51886f65', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-vda', 'timestamp': '2025-10-14T10:12:50.813848', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '573e75dc-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.874805913, 'message_signature': 'cfa2c20a78d2575324c9ec3c4dcedc518c075816b5726849fca8eb271c8f3076'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-sda', 'timestamp': 
'2025-10-14T10:12:50.813848', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '573e881a-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.874805913, 'message_signature': 'eef3154cb2092d575a02b845da081d8e3de9e7ee3c096e74002935ac6aff8b77'}]}, 'timestamp': '2025-10-14 10:12:50.814815', '_unique_id': '974d9835529746b8803dac2200ae7ba4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging return 
retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.815 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.817 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.817 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.817 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: 
ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.817 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.817 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c6a258c8-69a5-48b0-b472-eccaa5e9f748', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.817921', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 
128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '573f15f0-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 'message_signature': '0269af02a41948d8ba9cf7dba8cf06991aff869cef766a39d7e0275cd13863bd'}]}, 'timestamp': '2025-10-14 10:12:50.818445', '_unique_id': '345094f3dabd4127833612110d7db6f5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory 
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.819 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.820 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.820 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.incoming.bytes volume: 1558 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '571d6ba1-6e5a-4e4d-badc-ac41be5022ac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 1558, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.820632', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '573f7f68-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 'message_signature': 'dccf3d58fa73a2e12d83850b545494dc9a97107bac384fffa283c95724b485a5'}]}, 'timestamp': '2025-10-14 10:12:50.821137', '_unique_id': '9008db93457a4b4a977e847cd6fb5b1d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging yield
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.822 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.823 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.823 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.write.bytes volume: 72761344 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.823 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '923f3ae8-b85f-4b00-b40b-1781e690a205', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 72761344, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-vda', 'timestamp': '2025-10-14T10:12:50.823291', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '573fe796-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '5d371bafffa36be1b449251c507c903526122f12c75596d952b85a3d228659df'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-sda', 'timestamp': '2025-10-14T10:12:50.823291', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '573ffa38-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.918998714, 'message_signature': '8c38b33decb1a2223b34c8e36837af4f0bd3567573e4fb6f9da921922aeb52a3'}]}, 'timestamp': '2025-10-14 10:12:50.824255', '_unique_id': '09532c0cfb494fd2bc46adcd4f941471'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging yield
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.825 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.826 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.826 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.outgoing.bytes volume: 1284 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6bd3dff3-fba9-4d5e-9706-c4466728cc24', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 1284, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.826523', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '5740652c-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 'message_signature': '1b22fe538073ba2ed4957add555d37526f466c1a6cd06b5435f6e2deca910089'}]}, 'timestamp': '2025-10-14 10:12:50.827021', '_unique_id': 'b4fa48218ae6451bb3948eec26c1a261'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging yield
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 
localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.827 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.829 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.829 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1949c1ca-904b-400f-8524-a5453dba9535', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.829227', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '5740cdaa-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 'message_signature': '1436a31494f59aa4484c03c3c2dc62670858d328c5910ab5e4643809c29ea20d'}]}, 'timestamp': '2025-10-14 10:12:50.829692', '_unique_id': 'e423f819e90a40f5bc0882051c11661d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging yield
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.830 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.831 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.831 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.832 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.832 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.832 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/cpu volume: 12070000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '4cbd5d7f-761d-4990-ab90-1904922352d7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 12070000000, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'timestamp': '2025-10-14T10:12:50.832468', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'cpu_number': 1}, 'message_id': '57414c30-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.982234538, 'message_signature': '96b59aa528eadb3d7ffbbf9fae26ec1b38251018a5c1bd88d4f52eb146b91d7c'}]}, 'timestamp': '2025-10-14 10:12:50.832957', '_unique_id': 'a7fa829a23934a7780bbf54bdc7b6805'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging yield
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.833 12 ERROR oslo_messaging.notify.messaging
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.834 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 14 06:12:50
localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.835 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.outgoing.packets volume: 10 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1f608bfd-c68e-4c85-95b0-0f4e4e0fc996', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 10, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.835087', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '5741b27e-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 
'message_signature': '9b94fa87a27647d9fe98bfb0a3f8312a9ec47126c0920093a49f35de6284f933'}]}, 'timestamp': '2025-10-14 10:12:50.835646', '_unique_id': '9794182db82c45a1ad9881bb8b3530c9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.836 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.837 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.837 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '431ceae2-9a12-40a6-87ad-cf205b968a6e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': 'instance-00000009-9d663561-9fd7-4dea-b31c-23b820127bbe-tapf5a1b7e6-aa', 'timestamp': '2025-10-14T10:12:50.837870', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapf5a1b7e6-aa', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4a:4f:a4', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapf5a1b7e6-aa'}, 'message_id': '57421f48-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.908580828, 'message_signature': '6a876d70e3e85d8136fc28b224eff9d8e1b10df9e27081c2223f8419a46a238c'}]}, 'timestamp': '2025-10-14 10:12:50.838335', '_unique_id': 'e92e4ea3808a42aaa131d6c09a63d785'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 
localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.839 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.840 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.840 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.841 12 DEBUG ceilometer.compute.pollsters [-] 9d663561-9fd7-4dea-b31c-23b820127bbe/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '21869bc0-c21e-49c5-9a7e-c33a2d16a6bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-vda', 'timestamp': '2025-10-14T10:12:50.840613', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '57428c12-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.874805913, 'message_signature': '82a2b2f49ca5a1846d7a0264d7a9615236c3517e5d448360192ee025fd46bb2f'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': 'a5c8b032521c4660a9f50471da931c3a', 'user_name': None, 'project_id': '67facb686b1a45e4af5a7329836978ce', 'project_name': None, 'resource_id': '9d663561-9fd7-4dea-b31c-23b820127bbe-sda', 'timestamp': '2025-10-14T10:12:50.840613', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000009', 'instance_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'instance_type': 'm1.nano', 'host': 'adb975a5c15bc45e34d51c575d7c8d929ba3002ef330502971844b8c', 'instance_host': 'np0005486731.localdomain', 'flavor': {'id': '3d2e2556-398d-47fa-b582-04a393026796', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a'}, 'image_ref': '4d7273e1-0c4b-46b6-bdfa-9a43be3f063a', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '57429da6-a8e6-11f0-8bd6-fa163ec9f0cc', 'monotonic_time': 12857.874805913, 'message_signature': '477636803a60b8d824b920fb82d019b3ad6f844f2a054c417ed14ade2a50e3f6'}]}, 'timestamp': '2025-10-14 10:12:50.841554', '_unique_id': 'ddaefc77aa3a46a2b0f5e7f480ff58b6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging yield Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 
2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 14 06:12:50 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 14 06:12:50 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:12:50.842 12 ERROR oslo_messaging.notify.messaging Oct 14 06:12:51 localhost nova_compute[295778]: 2025-10-14 10:12:51.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 788 KiB/s rd, 3.9 MiB/s wr, 220 op/s Oct 14 06:12:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:12:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:12:51 localhost podman[326227]: 2025-10-14 10:12:51.551518408 +0000 UTC m=+0.083514582 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:12:51 localhost podman[326227]: 2025-10-14 10:12:51.564022829 +0000 UTC m=+0.096018983 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:12:51 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:12:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e109 do_prune osdmap full prune enabled Oct 14 06:12:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e110 e110: 6 total, 6 up, 6 in Oct 14 06:12:51 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e110: 6 total, 6 up, 6 in Oct 14 06:12:51 localhost podman[326226]: 2025-10-14 10:12:51.653799176 +0000 UTC m=+0.189757765 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid) Oct 14 06:12:51 localhost podman[326226]: 2025-10-14 10:12:51.66716203 +0000 UTC m=+0.203120669 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 06:12:51 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:12:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:51.950 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:12:49Z, description=, device_id=cd458f74-59aa-4484-a529-3365c9369c99, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=780cdc16-6535-45a4-83b6-c7aed06313ef, ip_allocation=immediate, mac_address=fa:16:3e:69:0c:cd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:12:45Z, description=, dns_domain=, id=77037dfd-d1e0-4c52-b2d1-08dfead9ed93, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1652557293-network, port_security_enabled=True, project_id=13c2d838c66c4141a3a77483b40ab737, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29905, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=835, status=ACTIVE, subnets=['fd45d254-8ba5-4ade-9397-3b27e598df2c'], tags=[], 
tenant_id=13c2d838c66c4141a3a77483b40ab737, updated_at=2025-10-14T10:12:46Z, vlan_transparent=None, network_id=77037dfd-d1e0-4c52-b2d1-08dfead9ed93, port_security_enabled=False, project_id=13c2d838c66c4141a3a77483b40ab737, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=865, status=DOWN, tags=[], tenant_id=13c2d838c66c4141a3a77483b40ab737, updated_at=2025-10-14T10:12:49Z on network 77037dfd-d1e0-4c52-b2d1-08dfead9ed93#033[00m Oct 14 06:12:52 localhost nova_compute[295778]: 2025-10-14 10:12:52.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:52 localhost dnsmasq[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/addn_hosts - 1 addresses Oct 14 06:12:52 localhost dnsmasq-dhcp[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/host Oct 14 06:12:52 localhost podman[326281]: 2025-10-14 10:12:52.264827053 +0000 UTC m=+0.061969402 container kill 2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:12:52 localhost dnsmasq-dhcp[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/opts Oct 14 06:12:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:52.498 270389 INFO neutron.agent.dhcp.agent [None req-5a648f76-ab71-4e2f-a173-dc3680dc96a9 - - - - - -] DHCP configuration for ports {'780cdc16-6535-45a4-83b6-c7aed06313ef'} is 
completed#033[00m Oct 14 06:12:52 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e110 do_prune osdmap full prune enabled Oct 14 06:12:52 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e111 e111: 6 total, 6 up, 6 in Oct 14 06:12:52 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e111: 6 total, 6 up, 6 in Oct 14 06:12:52 localhost nova_compute[295778]: 2025-10-14 10:12:52.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 6.4 MiB/s wr, 235 op/s Oct 14 06:12:53 localhost nova_compute[295778]: 2025-10-14 10:12:53.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e111 do_prune osdmap full prune enabled Oct 14 06:12:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e112 e112: 6 total, 6 up, 6 in Oct 14 06:12:53 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e112: 6 total, 6 up, 6 in Oct 14 06:12:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e112 do_prune osdmap full prune enabled Oct 14 06:12:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e113 e113: 6 total, 6 up, 6 in Oct 14 06:12:54 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e113: 6 total, 6 up, 6 in Oct 14 06:12:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:55.025 2 INFO neutron.agent.securitygroups_rpc [req-e6959410-d129-480e-a147-8e98f7e24fc8 
req-7c458f69-da5c-46ee-8923-ac6067975969 c2de1fcd0fbe455e9592b601274dbbf7 c78e5db87e954cd8b794aa988dac4a81 - - default default] Security group rule updated ['782652d1-5f0f-4241-8596-761d80284e94']#033[00m Oct 14 06:12:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 225 MiB data, 894 MiB used, 41 GiB / 42 GiB avail; 129 KiB/s rd, 57 KiB/s wr, 187 op/s Oct 14 06:12:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:55.255 2 INFO neutron.agent.securitygroups_rpc [req-f02a1572-fe5a-494f-9113-9ece324eca7c req-2fd4338c-91aa-46fe-843e-9bf1a0f505c0 dcb5a2297cd24cb99021d3afeeb30262 13c2d838c66c4141a3a77483b40ab737 - - default default] Security group rule updated ['67264956-a547-41ed-9237-0e5135302f4b']#033[00m Oct 14 06:12:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:55.352 2 INFO neutron.agent.securitygroups_rpc [req-c1c2d624-3b37-4668-ac64-58d3f3ff9e40 req-99ad6fa3-187d-42b9-93c5-2b32fe61d110 c2de1fcd0fbe455e9592b601274dbbf7 c78e5db87e954cd8b794aa988dac4a81 - - default default] Security group rule updated ['782652d1-5f0f-4241-8596-761d80284e94']#033[00m Oct 14 06:12:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:55.681 2 INFO neutron.agent.securitygroups_rpc [req-ab0fd27c-7932-4aec-9556-2a7b53b74d18 req-9906f85f-3384-41bb-8971-24a6a1b941ed dcb5a2297cd24cb99021d3afeeb30262 13c2d838c66c4141a3a77483b40ab737 - - default default] Security group rule updated ['bb3ad66c-6201-430c-bad3-8abc33700260']#033[00m Oct 14 06:12:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:56.380 2 INFO neutron.agent.securitygroups_rpc [req-668faa43-879e-4706-8728-13d10bbf4f34 req-37f12eb2-7aab-404b-a60d-ef075536d933 dcb5a2297cd24cb99021d3afeeb30262 13c2d838c66c4141a3a77483b40ab737 - - default default] Security group rule updated ['24c17bff-f84d-497f-8b88-2810c752a476']#033[00m Oct 14 06:12:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e113 do_prune osdmap full prune enabled Oct 14 
06:12:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e114 e114: 6 total, 6 up, 6 in Oct 14 06:12:56 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e114: 6 total, 6 up, 6 in Oct 14 06:12:57 localhost nova_compute[295778]: 2025-10-14 10:12:57.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:57 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:57.131 2 INFO neutron.agent.securitygroups_rpc [req-1daaa1a3-1076-4377-ab62-c0d0300597ee req-fd1f3398-f227-4660-b3d0-f5475073bf9e dcb5a2297cd24cb99021d3afeeb30262 13c2d838c66c4141a3a77483b40ab737 - - default default] Security group rule updated ['b785b874-262d-4aab-b7de-c29b99611a13']#033[00m Oct 14 06:12:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 225 MiB data, 894 MiB used, 41 GiB / 42 GiB avail; 114 KiB/s rd, 51 KiB/s wr, 165 op/s Oct 14 06:12:57 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:57.306 2 INFO neutron.agent.securitygroups_rpc [None req-74f3ff50-6220-41d6-9c41-09243b4b19a0 e149b330d384449aa335bc66ff84b21a 4c1ab5e91446409bbe9b95f0f44fd3af - - default default] Security group member updated ['c19816b0-9715-42c1-a697-9db8e13e1f7e']#033[00m Oct 14 06:12:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:12:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:12:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:12:57 localhost systemd[1]: tmp-crun.WT3dt5.mount: Deactivated successfully. 
Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.442 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:12:57 localhost nova_compute[295778]: 2025-10-14 10:12:57.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.445 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:12:57 localhost systemd[1]: tmp-crun.zW3NpM.mount: Deactivated successfully. 
Oct 14 06:12:57 localhost podman[326306]: 2025-10-14 10:12:57.46045081 +0000 UTC m=+0.113522836 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:12:57 localhost podman[326304]: 2025-10-14 10:12:57.422920916 +0000 UTC m=+0.085866375 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal 
Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64) Oct 14 06:12:57 localhost podman[326305]: 2025-10-14 10:12:57.484539778 +0000 UTC m=+0.144295542 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:12:57 localhost podman[326306]: 2025-10-14 10:12:57.503173161 +0000 UTC m=+0.156245217 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:12:57 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:12:57 localhost podman[326305]: 2025-10-14 10:12:57.549489697 +0000 UTC m=+0.209245391 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2) Oct 14 06:12:57 localhost podman[326304]: 2025-10-14 10:12:57.558450694 +0000 UTC m=+0.221396132 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 14 06:12:57 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:12:57 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.638 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.639 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.640 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:12:57 localhost nova_compute[295778]: 2025-10-14 10:12:57.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e114 
do_prune osdmap full prune enabled Oct 14 06:12:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e115 e115: 6 total, 6 up, 6 in Oct 14 06:12:57 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e115: 6 total, 6 up, 6 in Oct 14 06:12:57 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:57.806 270389 INFO neutron.agent.linux.ip_lib [None req-ac9ea6e8-b258-4abc-b101-c8b5e8b1ab5e - - - - - -] Device tap35f34486-85 cannot be used as it has no MAC address#033[00m Oct 14 06:12:57 localhost nova_compute[295778]: 2025-10-14 10:12:57.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:57 localhost kernel: device tap35f34486-85 entered promiscuous mode Oct 14 06:12:57 localhost NetworkManager[5972]: [1760436777.8423] manager: (tap35f34486-85): new Generic device (/org/freedesktop/NetworkManager/Devices/28) Oct 14 06:12:57 localhost ovn_controller[156286]: 2025-10-14T10:12:57Z|00119|binding|INFO|Claiming lport 35f34486-85b7-4ffd-b608-500e8218eaa6 for this chassis. Oct 14 06:12:57 localhost ovn_controller[156286]: 2025-10-14T10:12:57Z|00120|binding|INFO|35f34486-85b7-4ffd-b608-500e8218eaa6: Claiming unknown Oct 14 06:12:57 localhost nova_compute[295778]: 2025-10-14 10:12:57.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:57 localhost systemd-udevd[326383]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.860 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-09d46e83-578f-4f8d-865a-6f75a3ef1025', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09d46e83-578f-4f8d-865a-6f75a3ef1025', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c1ab5e91446409bbe9b95f0f44fd3af', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88408c1f-c146-4c46-8790-7fd321b5cb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=35f34486-85b7-4ffd-b608-500e8218eaa6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.863 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 35f34486-85b7-4ffd-b608-500e8218eaa6 in datapath 09d46e83-578f-4f8d-865a-6f75a3ef1025 bound to our chassis#033[00m Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.868 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port fa474357-738c-4149-9356-26a7274e0ad6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.868 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 09d46e83-578f-4f8d-865a-6f75a3ef1025, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:12:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:12:57.869 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1015b4-cbb1-439e-84d8-c5f78421e5bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:12:57 localhost journal[236030]: ethtool ioctl error on tap35f34486-85: No such device Oct 14 06:12:57 localhost journal[236030]: ethtool ioctl error on tap35f34486-85: No such device Oct 14 06:12:57 localhost journal[236030]: ethtool ioctl error on tap35f34486-85: No such device Oct 14 06:12:57 localhost nova_compute[295778]: 2025-10-14 10:12:57.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:57 localhost journal[236030]: ethtool ioctl error on tap35f34486-85: No such device Oct 14 06:12:57 localhost ovn_controller[156286]: 2025-10-14T10:12:57Z|00121|binding|INFO|Setting lport 35f34486-85b7-4ffd-b608-500e8218eaa6 ovn-installed in OVS Oct 14 06:12:57 localhost ovn_controller[156286]: 2025-10-14T10:12:57Z|00122|binding|INFO|Setting lport 35f34486-85b7-4ffd-b608-500e8218eaa6 up in Southbound Oct 14 06:12:57 localhost nova_compute[295778]: 2025-10-14 10:12:57.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:57 localhost nova_compute[295778]: 2025-10-14 10:12:57.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:57 localhost journal[236030]: ethtool ioctl error on tap35f34486-85: No such device Oct 14 06:12:57 localhost journal[236030]: ethtool ioctl error on tap35f34486-85: No such device Oct 14 06:12:57 localhost journal[236030]: ethtool ioctl error on tap35f34486-85: No such device Oct 14 06:12:57 localhost journal[236030]: ethtool ioctl error on tap35f34486-85: No such device Oct 14 06:12:57 localhost nova_compute[295778]: 2025-10-14 10:12:57.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:12:58 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:58.039 2 INFO neutron.agent.securitygroups_rpc [req-b57563d1-e133-4056-b375-dfcfffb3c465 req-ba99f2c8-dfe0-40ed-b8ed-2897051ad36a dcb5a2297cd24cb99021d3afeeb30262 13c2d838c66c4141a3a77483b40ab737 - - default default] Security group rule updated ['7345ec05-011e-4ded-86dc-94bcbcf0917f']#033[00m Oct 14 06:12:58 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:58.108 2 INFO neutron.agent.securitygroups_rpc [None req-5e1e206d-69c6-4a29-b858-d7026e93ccfa e149b330d384449aa335bc66ff84b21a 4c1ab5e91446409bbe9b95f0f44fd3af - - default default] Security group member updated ['c19816b0-9715-42c1-a697-9db8e13e1f7e']#033[00m Oct 14 06:12:58 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:58.539 2 INFO neutron.agent.securitygroups_rpc [req-c4643eec-66e8-42c5-8b11-066dcb7f40df req-5fb78f31-841d-432b-ae05-9090fdba3779 dcb5a2297cd24cb99021d3afeeb30262 13c2d838c66c4141a3a77483b40ab737 - - default default] Security group rule updated ['7345ec05-011e-4ded-86dc-94bcbcf0917f']#033[00m Oct 14 06:12:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e115 do_prune osdmap full prune enabled Oct 14 06:12:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e116 e116: 6 total, 6 up, 6 in Oct 14 06:12:58 localhost ceph-mon[307093]: 
log_channel(cluster) log [DBG] : osdmap e116: 6 total, 6 up, 6 in Oct 14 06:12:58 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:58.850 2 INFO neutron.agent.securitygroups_rpc [None req-a7da228c-ba85-4de2-88a7-e29995d36c73 e149b330d384449aa335bc66ff84b21a 4c1ab5e91446409bbe9b95f0f44fd3af - - default default] Security group member updated ['c19816b0-9715-42c1-a697-9db8e13e1f7e']#033[00m Oct 14 06:12:58 localhost neutron_sriov_agent[263389]: 2025-10-14 10:12:58.942 2 INFO neutron.agent.securitygroups_rpc [req-83ebe009-585c-406e-8ba0-dd94c07e5ac0 req-514f789a-26da-41eb-9b01-d2407de9e82f dcb5a2297cd24cb99021d3afeeb30262 13c2d838c66c4141a3a77483b40ab737 - - default default] Security group rule updated ['7345ec05-011e-4ded-86dc-94bcbcf0917f']#033[00m Oct 14 06:12:58 localhost podman[326454]: Oct 14 06:12:58 localhost podman[326454]: 2025-10-14 10:12:58.991618538 +0000 UTC m=+0.087900309 container create 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:12:59 localhost systemd[1]: Started libpod-conmon-24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2.scope. Oct 14 06:12:59 localhost podman[326454]: 2025-10-14 10:12:58.94866448 +0000 UTC m=+0.044946291 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:12:59 localhost systemd[1]: Started libcrun container. 
Oct 14 06:12:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0dff8971a951f2ab196dae0abfce458b7725a98a790623014d84d8732a61c2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:12:59 localhost podman[326454]: 2025-10-14 10:12:59.084209989 +0000 UTC m=+0.180491770 container init 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:12:59 localhost podman[326454]: 2025-10-14 10:12:59.092503099 +0000 UTC m=+0.188784870 container start 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:12:59 localhost dnsmasq[326473]: started, version 2.85 cachesize 150 Oct 14 06:12:59 localhost dnsmasq[326473]: DNS service limited to local subnets Oct 14 06:12:59 localhost dnsmasq[326473]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:12:59 localhost dnsmasq[326473]: warning: no upstream servers 
configured Oct 14 06:12:59 localhost dnsmasq-dhcp[326473]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:12:59 localhost dnsmasq[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/addn_hosts - 0 addresses Oct 14 06:12:59 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/host Oct 14 06:12:59 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/opts Oct 14 06:12:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 225 MiB data, 894 MiB used, 41 GiB / 42 GiB avail; 115 KiB/s rd, 51 KiB/s wr, 167 op/s Oct 14 06:12:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:59.163 270389 INFO neutron.agent.dhcp.agent [None req-6b3d4502-7beb-40b8-a818-2ecbcacaa591 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:12:57Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ece9e79d-c1d5-45ba-af0e-399ee603e8e2, ip_allocation=immediate, mac_address=fa:16:3e:68:1c:95, name=tempest-ExtraDHCPOptionsTestJSON-256504040, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:12:55Z, description=, dns_domain=, id=09d46e83-578f-4f8d-865a-6f75a3ef1025, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-847112608, port_security_enabled=True, project_id=4c1ab5e91446409bbe9b95f0f44fd3af, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6708, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=903, status=ACTIVE, subnets=['c6539584-0fff-49e7-b8d9-719623b9a18a'], 
tags=[], tenant_id=4c1ab5e91446409bbe9b95f0f44fd3af, updated_at=2025-10-14T10:12:56Z, vlan_transparent=None, network_id=09d46e83-578f-4f8d-865a-6f75a3ef1025, port_security_enabled=True, project_id=4c1ab5e91446409bbe9b95f0f44fd3af, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['c19816b0-9715-42c1-a697-9db8e13e1f7e'], standard_attr_id=939, status=DOWN, tags=[], tenant_id=4c1ab5e91446409bbe9b95f0f44fd3af, updated_at=2025-10-14T10:12:57Z on network 09d46e83-578f-4f8d-865a-6f75a3ef1025#033[00m Oct 14 06:12:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:59.261 270389 INFO neutron.agent.dhcp.agent [None req-3572b88a-1861-4679-9e0b-8102e0a6d684 - - - - - -] DHCP configuration for ports {'d95a9609-f7a5-4060-9e94-92a5a5d80225'} is completed#033[00m Oct 14 06:12:59 localhost dnsmasq[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/addn_hosts - 1 addresses Oct 14 06:12:59 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/host Oct 14 06:12:59 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/opts Oct 14 06:12:59 localhost podman[326491]: 2025-10-14 10:12:59.392604044 +0000 UTC m=+0.060887783 container kill 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:12:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:12:59.548 270389 INFO neutron.agent.dhcp.agent [None 
req-42ddbd3e-ae13-45f0-b22e-9b27c8c7cd14 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:12:57Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=b680ff2d-5ff3-422f-8931-e0863920e571, ip_allocation=immediate, mac_address=fa:16:3e:18:d4:a4, name=tempest-ExtraDHCPOptionsTestJSON-1402063358, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:12:55Z, description=, dns_domain=, id=09d46e83-578f-4f8d-865a-6f75a3ef1025, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-847112608, port_security_enabled=True, project_id=4c1ab5e91446409bbe9b95f0f44fd3af, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6708, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=903, status=ACTIVE, subnets=['c6539584-0fff-49e7-b8d9-719623b9a18a'], tags=[], tenant_id=4c1ab5e91446409bbe9b95f0f44fd3af, updated_at=2025-10-14T10:12:56Z, vlan_transparent=None, network_id=09d46e83-578f-4f8d-865a-6f75a3ef1025, port_security_enabled=True, project_id=4c1ab5e91446409bbe9b95f0f44fd3af, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['c19816b0-9715-42c1-a697-9db8e13e1f7e'], standard_attr_id=942, status=DOWN, tags=[], tenant_id=4c1ab5e91446409bbe9b95f0f44fd3af, updated_at=2025-10-14T10:12:57Z on network 09d46e83-578f-4f8d-865a-6f75a3ef1025#033[00m Oct 14 06:12:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:12:59 localhost neutron_dhcp_agent[270385]: 
2025-10-14 10:12:59.649 270389 INFO neutron.agent.dhcp.agent [None req-ba8129d9-5d63-4ec1-a798-e56f0aff9e3d - - - - - -] DHCP configuration for ports {'ece9e79d-c1d5-45ba-af0e-399ee603e8e2'} is completed#033[00m Oct 14 06:12:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e116 do_prune osdmap full prune enabled Oct 14 06:12:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e117 e117: 6 total, 6 up, 6 in Oct 14 06:12:59 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e117: 6 total, 6 up, 6 in Oct 14 06:12:59 localhost dnsmasq[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/addn_hosts - 2 addresses Oct 14 06:12:59 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/host Oct 14 06:12:59 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/opts Oct 14 06:12:59 localhost podman[326527]: 2025-10-14 10:12:59.816956748 +0000 UTC m=+0.064984641 container kill 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:13:00 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:00.065 270389 INFO neutron.agent.dhcp.agent [None req-5f8c774c-8f69-47ad-aa4e-179a40eb59c3 - - - - - -] DHCP configuration for ports {'b680ff2d-5ff3-422f-8931-e0863920e571'} is completed#033[00m Oct 14 06:13:00 localhost dnsmasq[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/addn_hosts - 1 addresses Oct 14 06:13:00 localhost 
dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/host Oct 14 06:13:00 localhost podman[326564]: 2025-10-14 10:13:00.20568098 +0000 UTC m=+0.070314852 container kill 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:13:00 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/opts Oct 14 06:13:00 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:00.323 2 INFO neutron.agent.securitygroups_rpc [None req-a3d8b536-e13a-4da7-87d9-1b956ede87d9 e149b330d384449aa335bc66ff84b21a 4c1ab5e91446409bbe9b95f0f44fd3af - - default default] Security group member updated ['c19816b0-9715-42c1-a697-9db8e13e1f7e']#033[00m Oct 14 06:13:00 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:00.376 270389 INFO neutron.agent.dhcp.agent [None req-2800266b-f08e-4903-bd0a-ad200a6153aa - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:12:57Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=ece9e79d-c1d5-45ba-af0e-399ee603e8e2, ip_allocation=immediate, mac_address=fa:16:3e:68:1c:95, name=tempest-new-port-name-1062505797, network_id=09d46e83-578f-4f8d-865a-6f75a3ef1025, port_security_enabled=True, project_id=4c1ab5e91446409bbe9b95f0f44fd3af, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['c19816b0-9715-42c1-a697-9db8e13e1f7e'], standard_attr_id=939, status=DOWN, tags=[], tenant_id=4c1ab5e91446409bbe9b95f0f44fd3af, updated_at=2025-10-14T10:12:59Z on network 09d46e83-578f-4f8d-865a-6f75a3ef1025#033[00m Oct 14 06:13:00 localhost podman[246584]: time="2025-10-14T10:13:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:13:00 localhost podman[246584]: @ - - [14/Oct/2025:10:13:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149322 "" "Go-http-client/1.1" Oct 14 06:13:00 localhost podman[246584]: @ - - [14/Oct/2025:10:13:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20299 "" "Go-http-client/1.1" Oct 14 06:13:00 localhost dnsmasq[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/addn_hosts - 1 addresses Oct 14 06:13:00 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/host Oct 14 06:13:00 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/opts Oct 14 06:13:00 localhost podman[326603]: 2025-10-14 10:13:00.691833142 +0000 UTC m=+0.126763188 container kill 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS) Oct 14 06:13:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd 
e117 do_prune osdmap full prune enabled
Oct 14 06:13:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e118 e118: 6 total, 6 up, 6 in
Oct 14 06:13:00 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e118: 6 total, 6 up, 6 in
Oct 14 06:13:00 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:00.969 270389 INFO neutron.agent.dhcp.agent [None req-9fc18cbc-6846-4330-8e16-be7d4c879613 - - - - - -] DHCP configuration for ports {'ece9e79d-c1d5-45ba-af0e-399ee603e8e2'} is completed#033[00m
Oct 14 06:13:01 localhost dnsmasq[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/addn_hosts - 0 addresses
Oct 14 06:13:01 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/host
Oct 14 06:13:01 localhost dnsmasq-dhcp[326473]: read /var/lib/neutron/dhcp/09d46e83-578f-4f8d-865a-6f75a3ef1025/opts
Oct 14 06:13:01 localhost podman[326641]: 2025-10-14 10:13:01.060289186 +0000 UTC m=+0.067311613 container kill 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 06:13:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 225 MiB data, 894 MiB used, 41 GiB / 42 GiB avail; 171 KiB/s rd, 24 KiB/s wr, 235 op/s
Oct 14 06:13:01 localhost dnsmasq[326473]: exiting on receipt of SIGTERM
Oct 14 06:13:01 localhost podman[326678]: 2025-10-14 10:13:01.484337913 +0000 UTC m=+0.065515746 container kill 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 06:13:01 localhost systemd[1]: libpod-24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2.scope: Deactivated successfully.
Oct 14 06:13:01 localhost ovn_controller[156286]: 2025-10-14T10:13:01Z|00123|binding|INFO|Removing iface tap35f34486-85 ovn-installed in OVS
Oct 14 06:13:01 localhost ovn_controller[156286]: 2025-10-14T10:13:01Z|00124|binding|INFO|Removing lport 35f34486-85b7-4ffd-b608-500e8218eaa6 ovn-installed in OVS
Oct 14 06:13:01 localhost nova_compute[295778]: 2025-10-14 10:13:01.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:01.492 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port fa474357-738c-4149-9356-26a7274e0ad6 with type ""#033[00m
Oct 14 06:13:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:01.500 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-09d46e83-578f-4f8d-865a-6f75a3ef1025', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09d46e83-578f-4f8d-865a-6f75a3ef1025', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c1ab5e91446409bbe9b95f0f44fd3af', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88408c1f-c146-4c46-8790-7fd321b5cb85, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=35f34486-85b7-4ffd-b608-500e8218eaa6) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:13:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:01.503 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 35f34486-85b7-4ffd-b608-500e8218eaa6 in datapath 09d46e83-578f-4f8d-865a-6f75a3ef1025 unbound from our chassis#033[00m
Oct 14 06:13:01 localhost nova_compute[295778]: 2025-10-14 10:13:01.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:01.507 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 09d46e83-578f-4f8d-865a-6f75a3ef1025, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:13:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:01.508 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8acf5a-0f2c-4346-b99d-deba67a093d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:13:01 localhost podman[326691]: 2025-10-14 10:13:01.568214963 +0000 UTC m=+0.068124334 container died 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 06:13:01 localhost systemd[1]: tmp-crun.d2s9b7.mount: Deactivated successfully.
Oct 14 06:13:01 localhost podman[326691]: 2025-10-14 10:13:01.67569422 +0000 UTC m=+0.175603591 container cleanup 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 06:13:01 localhost systemd[1]: libpod-conmon-24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2.scope: Deactivated successfully.
Oct 14 06:13:01 localhost systemd[1]: var-lib-containers-storage-overlay-e0dff8971a951f2ab196dae0abfce458b7725a98a790623014d84d8732a61c2c-merged.mount: Deactivated successfully.
Oct 14 06:13:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2-userdata-shm.mount: Deactivated successfully.
Oct 14 06:13:01 localhost podman[326696]: 2025-10-14 10:13:01.702374655 +0000 UTC m=+0.190240437 container remove 24a9a796b0968db7ea0949d6524787d3fdfde514d3261a3cdd25ddb5d45590e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-09d46e83-578f-4f8d-865a-6f75a3ef1025, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 14 06:13:01 localhost nova_compute[295778]: 2025-10-14 10:13:01.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:01 localhost kernel: device tap35f34486-85 left promiscuous mode
Oct 14 06:13:01 localhost nova_compute[295778]: 2025-10-14 10:13:01.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:01 localhost systemd[1]: run-netns-qdhcp\x2d09d46e83\x2d578f\x2d4f8d\x2d865a\x2d6f75a3ef1025.mount: Deactivated successfully.
Oct 14 06:13:01 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:01.760 270389 INFO neutron.agent.dhcp.agent [None req-109c3a9e-94b4-4c21-8313-101fa50ce332 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:13:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 14 06:13:01 localhost podman[326739]: 2025-10-14 10:13:01.837157764 +0000 UTC m=+0.061448057 container kill 2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:13:01 localhost dnsmasq[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/addn_hosts - 0 addresses
Oct 14 06:13:01 localhost dnsmasq-dhcp[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/host
Oct 14 06:13:01 localhost dnsmasq-dhcp[326187]: read /var/lib/neutron/dhcp/77037dfd-d1e0-4c52-b2d1-08dfead9ed93/opts
Oct 14 06:13:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e119 e119: 6 total, 6 up, 6 in
Oct 14 06:13:01 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e119: 6 total, 6 up, 6 in
Oct 14 06:13:01 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:01.999 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:13:02 localhost nova_compute[295778]: 2025-10-14 10:13:02.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:02 localhost ovn_controller[156286]: 2025-10-14T10:13:02Z|00125|binding|INFO|Releasing lport c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c from this chassis (sb_readonly=0)
Oct 14 06:13:02 localhost ovn_controller[156286]: 2025-10-14T10:13:02Z|00126|binding|INFO|Setting lport c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c down in Southbound
Oct 14 06:13:02 localhost kernel: device tapc7fd7e94-cd left promiscuous mode
Oct 14 06:13:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:02.096 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-77037dfd-d1e0-4c52-b2d1-08dfead9ed93', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77037dfd-d1e0-4c52-b2d1-08dfead9ed93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c2d838c66c4141a3a77483b40ab737', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46e6e16b-a46e-4716-84bd-5e74736016b1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:13:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:02.098 161932 INFO neutron.agent.ovn.metadata.agent [-] Port c7fd7e94-cd6b-4e1d-a0a9-6d8a4969a56c in datapath 77037dfd-d1e0-4c52-b2d1-08dfead9ed93 unbound from our chassis#033[00m
Oct 14 06:13:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:02.102 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77037dfd-d1e0-4c52-b2d1-08dfead9ed93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:13:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:02.103 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[d56482ae-8593-4db2-b510-debc459e83c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:13:02 localhost nova_compute[295778]: 2025-10-14 10:13:02.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:02 localhost nova_compute[295778]: 2025-10-14 10:13:02.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:02 localhost nova_compute[295778]: 2025-10-14 10:13:02.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:02 localhost ovn_controller[156286]: 2025-10-14T10:13:02Z|00127|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0)
Oct 14 06:13:02 localhost nova_compute[295778]: 2025-10-14 10:13:02.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:02 localhost ovn_controller[156286]: 2025-10-14T10:13:02Z|00128|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0)
Oct 14 06:13:02 localhost nova_compute[295778]: 2025-10-14 10:13:02.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:02 localhost nova_compute[295778]: 2025-10-14 10:13:02.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 225 MiB data, 894 MiB used, 41 GiB / 42 GiB avail; 157 KiB/s rd, 22 KiB/s wr, 215 op/s
Oct 14 06:13:03 localhost openstack_network_exporter[248748]: ERROR 10:13:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:13:03 localhost openstack_network_exporter[248748]: ERROR 10:13:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 06:13:03 localhost openstack_network_exporter[248748]: ERROR 10:13:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:13:03 localhost openstack_network_exporter[248748]: ERROR 10:13:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 06:13:03 localhost openstack_network_exporter[248748]:
Oct 14 06:13:03 localhost openstack_network_exporter[248748]: ERROR 10:13:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 06:13:03 localhost openstack_network_exporter[248748]:
Oct 14 06:13:03 localhost ovn_controller[156286]: 2025-10-14T10:13:03Z|00129|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0)
Oct 14 06:13:03 localhost nova_compute[295778]: 2025-10-14 10:13:03.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:04 localhost podman[326777]: 2025-10-14 10:13:04.009014885 +0000 UTC m=+0.069596614 container kill 2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 06:13:04 localhost dnsmasq[326187]: exiting on receipt of SIGTERM
Oct 14 06:13:04 localhost systemd[1]: tmp-crun.Z2KeBz.mount: Deactivated successfully.
Oct 14 06:13:04 localhost systemd[1]: libpod-2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81.scope: Deactivated successfully.
Oct 14 06:13:04 localhost podman[326792]: 2025-10-14 10:13:04.083176558 +0000 UTC m=+0.055889411 container died 2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 06:13:04 localhost podman[326792]: 2025-10-14 10:13:04.12101176 +0000 UTC m=+0.093724563 container cleanup 2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 06:13:04 localhost systemd[1]: libpod-conmon-2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81.scope: Deactivated successfully.
Oct 14 06:13:04 localhost podman[326794]: 2025-10-14 10:13:04.174699972 +0000 UTC m=+0.138508419 container remove 2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-77037dfd-d1e0-4c52-b2d1-08dfead9ed93, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 06:13:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:04.204 270389 INFO neutron.agent.dhcp.agent [None req-2f15a4da-6b18-43d0-8b4c-c03c06e858f4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:13:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:04.205 270389 INFO neutron.agent.dhcp.agent [None req-2f15a4da-6b18-43d0-8b4c-c03c06e858f4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:13:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:13:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 14 06:13:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e120 e120: 6 total, 6 up, 6 in
Oct 14 06:13:04 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e120: 6 total, 6 up, 6 in
Oct 14 06:13:05 localhost systemd[1]: var-lib-containers-storage-overlay-4b457199e75ad73cdd78b1415b8f1bf16327e6560faf24c0237ceae852dffcbb-merged.mount: Deactivated successfully.
Oct 14 06:13:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2bad9127020e1e44972cd2e82b4019f3563f82f574ee12910c54620a7c1aaa81-userdata-shm.mount: Deactivated successfully.
Oct 14 06:13:05 localhost systemd[1]: run-netns-qdhcp\x2d77037dfd\x2dd1e0\x2d4c52\x2db2d1\x2d08dfead9ed93.mount: Deactivated successfully.
Oct 14 06:13:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 225 MiB data, 894 MiB used, 41 GiB / 42 GiB avail; 204 KiB/s rd, 24 KiB/s wr, 281 op/s
Oct 14 06:13:07 localhost nova_compute[295778]: 2025-10-14 10:13:07.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 225 MiB data, 894 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 5.2 KiB/s wr, 88 op/s
Oct 14 06:13:07 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:07.447 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 06:13:07 localhost nova_compute[295778]: 2025-10-14 10:13:07.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:08.157 162030 DEBUG eventlet.wsgi.server [-] (162030) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct 14 06:13:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:08.159 162030 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Oct 14 06:13:08 localhost ovn_metadata_agent[161927]: Accept: */*#015
Oct 14 06:13:08 localhost ovn_metadata_agent[161927]: Connection: close#015
Oct 14 06:13:08 localhost ovn_metadata_agent[161927]: Content-Type: text/plain#015
Oct 14 06:13:08 localhost ovn_metadata_agent[161927]: Host: 169.254.169.254#015
Oct 14 06:13:08 localhost ovn_metadata_agent[161927]: User-Agent: curl/7.84.0#015
Oct 14 06:13:08 localhost ovn_metadata_agent[161927]: X-Forwarded-For: 10.100.0.6#015
Oct 14 06:13:08 localhost ovn_metadata_agent[161927]: X-Ovn-Network-Id: 35f103ce-4039-44a2-a9f1-269864e57b47 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct 14 06:13:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:13:09
Oct 14 06:13:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:13:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap
Oct 14 06:13:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['manila_data', '.mgr', 'images', 'manila_metadata', 'vms', 'backups', 'volumes']
Oct 14 06:13:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:13:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 225 MiB data, 894 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 4.1 KiB/s wr, 70 op/s
Oct 14 06:13:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:13:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:13:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:13:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:13:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:13:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006584026905360134 of space, bias 1.0, pg target 1.3168053810720268 quantized to 32 (current 32)
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32)
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:13:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019465818676716918 quantized to 16 (current 16)
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:13:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.272 162030 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct 14 06:13:09 localhost haproxy-metadata-proxy-35f103ce-4039-44a2-a9f1-269864e57b47[326029]: 10.100.0.6:38254 [14/Oct/2025:10:13:08.155] listener listener/metadata 0/0/0/1117/1117 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.273 162030 INFO eventlet.wsgi.server [-] 10.100.0.6, "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200 len: 1673 time: 1.1137326#033[00m
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.407 2 DEBUG oslo_concurrency.lockutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquiring lock "9d663561-9fd7-4dea-b31c-23b820127bbe" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.407 2 DEBUG oslo_concurrency.lockutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.408 2 DEBUG oslo_concurrency.lockutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquiring lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.408 2 DEBUG oslo_concurrency.lockutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.408 2 DEBUG oslo_concurrency.lockutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.410 2 INFO nova.compute.manager [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Terminating instance#033[00m
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.411 2 DEBUG nova.compute.manager [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 06:13:09 localhost kernel: device tapf5a1b7e6-aa left promiscuous mode
Oct 14 06:13:09 localhost NetworkManager[5972]: [1760436789.4763] device (tapf5a1b7e6-aa): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Oct 14 06:13:09 localhost ovn_controller[156286]: 2025-10-14T10:13:09Z|00130|binding|INFO|Releasing lport f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb from this chassis (sb_readonly=0)
Oct 14 06:13:09 localhost ovn_controller[156286]: 2025-10-14T10:13:09Z|00131|binding|INFO|Setting lport f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb down in Southbound
Oct 14 06:13:09 localhost ovn_controller[156286]: 2025-10-14T10:13:09Z|00132|binding|INFO|Removing iface tapf5a1b7e6-aa ovn-installed in OVS
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.528 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:4f:a4 10.100.0.6'], port_security=['fa:16:3e:4a:4f:a4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9d663561-9fd7-4dea-b31c-23b820127bbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35f103ce-4039-44a2-a9f1-269864e57b47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67facb686b1a45e4af5a7329836978ce', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2c1552c-9248-46c1-8391-9c390debaa3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68dfed75-146b-4653-b2d8-e1bc5ca7cd98, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:13:09 localhost ovn_controller[156286]: 2025-10-14T10:13:09Z|00133|ovn_bfd|INFO|Disabled BFD on interface ovn-31b4da-0
Oct 14 06:13:09 localhost ovn_controller[156286]: 2025-10-14T10:13:09Z|00134|ovn_bfd|INFO|Disabled BFD on interface ovn-953af5-0
Oct 14 06:13:09 localhost ovn_controller[156286]: 2025-10-14T10:13:09Z|00135|ovn_bfd|INFO|Disabled BFD on interface ovn-4e3575-0
Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.530 161932 INFO neutron.agent.ovn.metadata.agent [-] Port f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb in datapath 35f103ce-4039-44a2-a9f1-269864e57b47 unbound from our chassis#033[00m
Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.533 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35f103ce-4039-44a2-a9f1-269864e57b47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:09 localhost ovn_controller[156286]: 2025-10-14T10:13:09Z|00136|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0)
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.534 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[653ae3bd-b5cd-4603-bd16-ca32054f6bb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.535 161932 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47 namespace which is not needed anymore#033[00m
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:13:09 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 14 06:13:09 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Consumed 14.781s CPU time.
Oct 14 06:13:09 localhost systemd-machined[205044]: Machine qemu-5-instance-00000009 terminated.
Oct 14 06:13:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e120 do_prune osdmap full prune enabled Oct 14 06:13:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e121 e121: 6 total, 6 up, 6 in Oct 14 06:13:09 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e121: 6 total, 6 up, 6 in Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:09 localhost ovn_controller[156286]: 2025-10-14T10:13:09Z|00137|binding|INFO|Releasing lport 42f114a4-f4db-4901-9f3a-f5496e6a4392 from this chassis (sb_readonly=0) Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:09 localhost NetworkManager[5972]: [1760436789.6320] manager: (tapf5a1b7e6-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/29) Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.653 2 INFO nova.virt.libvirt.driver [-] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Instance destroyed successfully.#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.654 2 DEBUG nova.objects.instance [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lazy-loading 'resources' on Instance uuid 9d663561-9fd7-4dea-b31c-23b820127bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.667 2 DEBUG nova.virt.libvirt.vif [None 
req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T10:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005486731.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=9,image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKRJa9flztUgTwnCl6PH+7wHHPjSI4E3ULd1AG6dlMpg0WFpMu8RmKybuAiNsf1DpcVzMtzORE22LeYcNeKsaszS3kKYeZVHRdc9csLSo0YNcaV5/5KSNFNcDAXaDqSfww==',key_name='tempest-keypair-1468241715',keypairs=,launch_index=0,launched_at=2025-10-14T10:12:33Z,launched_on='np0005486731.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005486731.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='67facb686b1a45e4af5a7329836978ce',ramdisk_id='',reservation_id='r-hychsdrl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4d7273e1-0c4b-46b6-bdfa-9a43be3f063a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-359728251',owner_user_name='tempest-ServersV294TestFqdnHostnames-359728251-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-10-14T10:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a5c8b032521c4660a9f50471da931c3a',uuid=9d663561-9fd7-4dea-b31c-23b820127bbe,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.668 2 DEBUG nova.network.os_vif_util [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 
a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Converting VIF {"id": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "address": "fa:16:3e:4a:4f:a4", "network": {"id": "35f103ce-4039-44a2-a9f1-269864e57b47", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-287680075-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "67facb686b1a45e4af5a7329836978ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5a1b7e6-aa", "ovs_interfaceid": "f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.669 2 DEBUG nova.network.os_vif_util [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:4f:a4,bridge_name='br-int',has_traffic_filtering=True,id=f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb,network=Network(35f103ce-4039-44a2-a9f1-269864e57b47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5a1b7e6-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m 
Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.670 2 DEBUG os_vif [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:4f:a4,bridge_name='br-int',has_traffic_filtering=True,id=f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb,network=Network(35f103ce-4039-44a2-a9f1-269864e57b47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5a1b7e6-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.674 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5a1b7e6-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.682 2 DEBUG nova.compute.manager [req-8b911014-3fbd-4e2e-899b-9eefcf7da843 req-4cbfadc7-aa0d-4152-af04-9e7c69a9e850 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received event network-vif-unplugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.682 2 DEBUG oslo_concurrency.lockutils [req-8b911014-3fbd-4e2e-899b-9eefcf7da843 req-4cbfadc7-aa0d-4152-af04-9e7c69a9e850 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.683 2 DEBUG oslo_concurrency.lockutils [req-8b911014-3fbd-4e2e-899b-9eefcf7da843 req-4cbfadc7-aa0d-4152-af04-9e7c69a9e850 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.683 2 DEBUG oslo_concurrency.lockutils [req-8b911014-3fbd-4e2e-899b-9eefcf7da843 req-4cbfadc7-aa0d-4152-af04-9e7c69a9e850 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.683 2 DEBUG nova.compute.manager [req-8b911014-3fbd-4e2e-899b-9eefcf7da843 req-4cbfadc7-aa0d-4152-af04-9e7c69a9e850 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] No waiting events found dispatching network-vif-unplugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.684 2 DEBUG nova.compute.manager [req-8b911014-3fbd-4e2e-899b-9eefcf7da843 req-4cbfadc7-aa0d-4152-af04-9e7c69a9e850 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received 
event network-vif-unplugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.689 2 INFO os_vif [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:4f:a4,bridge_name='br-int',has_traffic_filtering=True,id=f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb,network=Network(35f103ce-4039-44a2-a9f1-269864e57b47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5a1b7e6-aa')#033[00m Oct 14 06:13:09 localhost neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47[326023]: [NOTICE] (326027) : haproxy version is 2.8.14-c23fe91 Oct 14 06:13:09 localhost neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47[326023]: [NOTICE] (326027) : path to executable is /usr/sbin/haproxy Oct 14 06:13:09 localhost neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47[326023]: [WARNING] (326027) : Exiting Master process... Oct 14 06:13:09 localhost neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47[326023]: [ALERT] (326027) : Current worker (326029) exited with code 143 (Terminated) Oct 14 06:13:09 localhost neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47[326023]: [WARNING] (326027) : All workers exited. Exiting... 
(0) Oct 14 06:13:09 localhost systemd[1]: libpod-2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4.scope: Deactivated successfully. Oct 14 06:13:09 localhost podman[326859]: 2025-10-14 10:13:09.743070926 +0000 UTC m=+0.078482118 container died 2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 06:13:09 localhost podman[326859]: 2025-10-14 10:13:09.787501792 +0000 UTC m=+0.122912984 container cleanup 2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:13:09 localhost podman[326890]: 2025-10-14 10:13:09.834058575 +0000 UTC m=+0.082087274 container cleanup 2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:13:09 localhost systemd[1]: libpod-conmon-2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4.scope: Deactivated successfully. Oct 14 06:13:09 localhost podman[326906]: 2025-10-14 10:13:09.8897843 +0000 UTC m=+0.081141229 container remove 2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.894 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[68c381c2-1c17-44e9-a1d5-e4d9479bde98]: (4, ('Tue Oct 14 10:13:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47 (2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4)\n2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4\nTue Oct 14 10:13:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47 (2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4)\n2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.896 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[aa83f606-a657-4192-b0cb-d876fd8abd1a]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.898 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35f103ce-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:09 localhost kernel: device tap35f103ce-40 left promiscuous mode Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:09 localhost nova_compute[295778]: 2025-10-14 10:13:09.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.914 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[307cab9c-1eca-40b1-9799-d746f74770ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.936 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[183ba2b6-2e32-4978-bc90-f167fd62dd71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.937 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ea602b-0b7f-4820-98aa-bd35c797e389]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.954 320313 DEBUG oslo.privsep.daemon [-] privsep: 
reply[1f778626-0ffe-4fe7-917f-a284d2a3b31f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 
'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1283999, 'reachable_time': 18459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 
0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326924, 'error': None, 'target': 'ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.957 162035 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35f103ce-4039-44a2-a9f1-269864e57b47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 14 06:13:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:09.958 162035 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fec29f-769b-4c9d-b504-9b4c0f98ab95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:10 localhost nova_compute[295778]: 2025-10-14 10:13:10.350 2 INFO nova.virt.libvirt.driver [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Deleting instance files /var/lib/nova/instances/9d663561-9fd7-4dea-b31c-23b820127bbe_del#033[00m Oct 14 06:13:10 localhost nova_compute[295778]: 2025-10-14 10:13:10.351 2 INFO nova.virt.libvirt.driver [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Deletion of /var/lib/nova/instances/9d663561-9fd7-4dea-b31c-23b820127bbe_del complete#033[00m Oct 14 06:13:10 localhost nova_compute[295778]: 2025-10-14 10:13:10.427 2 INFO 
nova.compute.manager [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m Oct 14 06:13:10 localhost nova_compute[295778]: 2025-10-14 10:13:10.428 2 DEBUG oslo.service.loopingcall [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Oct 14 06:13:10 localhost nova_compute[295778]: 2025-10-14 10:13:10.428 2 DEBUG nova.compute.manager [-] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Oct 14 06:13:10 localhost nova_compute[295778]: 2025-10-14 10:13:10.429 2 DEBUG nova.network.neutron [-] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Oct 14 06:13:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:13:10 localhost systemd[1]: var-lib-containers-storage-overlay-fe8877213d766d2b9ee6b48536185ba6f5d1bc06a65668aca90f6247bfee032a-merged.mount: Deactivated successfully. Oct 14 06:13:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2839c5e0b9d72ece6ff0cdc0761e86d3b7be3abbd38ddb7f672bedfc7c9f5be4-userdata-shm.mount: Deactivated successfully. Oct 14 06:13:10 localhost systemd[1]: run-netns-ovnmeta\x2d35f103ce\x2d4039\x2d44a2\x2da9f1\x2d269864e57b47.mount: Deactivated successfully. 
Oct 14 06:13:10 localhost podman[326926]: 2025-10-14 10:13:10.805821653 +0000 UTC m=+0.087657492 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 14 06:13:10 localhost podman[326926]: 2025-10-14 10:13:10.841929779 +0000 UTC m=+0.123765588 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0) Oct 14 06:13:10 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:13:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 146 MiB data, 867 MiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 6.5 KiB/s wr, 97 op/s Oct 14 06:13:11 localhost nova_compute[295778]: 2025-10-14 10:13:11.732 2 DEBUG nova.compute.manager [req-4f8a779d-3b3b-4eae-8d46-4f46124fb383 req-a425f9ac-ef7c-424a-9e19-5b5529680378 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received event network-vif-plugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:13:11 localhost nova_compute[295778]: 2025-10-14 10:13:11.733 2 DEBUG oslo_concurrency.lockutils [req-4f8a779d-3b3b-4eae-8d46-4f46124fb383 req-a425f9ac-ef7c-424a-9e19-5b5529680378 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Acquiring lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:13:11 localhost nova_compute[295778]: 2025-10-14 10:13:11.733 2 DEBUG oslo_concurrency.lockutils [req-4f8a779d-3b3b-4eae-8d46-4f46124fb383 req-a425f9ac-ef7c-424a-9e19-5b5529680378 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:13:11 localhost nova_compute[295778]: 2025-10-14 10:13:11.734 2 DEBUG oslo_concurrency.lockutils [req-4f8a779d-3b3b-4eae-8d46-4f46124fb383 req-a425f9ac-ef7c-424a-9e19-5b5529680378 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] Lock 
"9d663561-9fd7-4dea-b31c-23b820127bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:13:11 localhost nova_compute[295778]: 2025-10-14 10:13:11.734 2 DEBUG nova.compute.manager [req-4f8a779d-3b3b-4eae-8d46-4f46124fb383 req-a425f9ac-ef7c-424a-9e19-5b5529680378 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] No waiting events found dispatching network-vif-plugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 14 06:13:11 localhost nova_compute[295778]: 2025-10-14 10:13:11.734 2 WARNING nova.compute.manager [req-4f8a779d-3b3b-4eae-8d46-4f46124fb383 req-a425f9ac-ef7c-424a-9e19-5b5529680378 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received unexpected event network-vif-plugged-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb for instance with vm_state active and task_state deleting.#033[00m Oct 14 06:13:11 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:11.849 2 INFO neutron.agent.securitygroups_rpc [req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 req-462ba4ec-ef16-49c8-b69f-f514a5e8e6e3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Security group member updated ['c2c1552c-9248-46c1-8391-9c390debaa3c']#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.095 2 DEBUG nova.network.neutron [-] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.126 2 INFO nova.compute.manager [-] [instance: 
9d663561-9fd7-4dea-b31c-23b820127bbe] Took 1.70 seconds to deallocate network for instance.#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.142 2 DEBUG nova.compute.manager [req-6fa7891a-7686-4155-8a70-0fc3bcee4b38 req-9b4b3469-2cc4-4aac-8040-52b31695f4d4 da5827fb8ee54b95a0a3cf62fcdcc49a f669ac1a1893421f91ae49881790edbc - - default default] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Received event network-vif-deleted-f5a1b7e6-aac0-455c-bdb5-96026f8f3bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.193 2 DEBUG oslo_concurrency.lockutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.194 2 DEBUG oslo_concurrency.lockutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.254 2 DEBUG oslo_concurrency.processutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:13:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e121 do_prune osdmap full prune enabled Oct 14 06:13:12 
localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e122 e122: 6 total, 6 up, 6 in Oct 14 06:13:12 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e122: 6 total, 6 up, 6 in Oct 14 06:13:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:13:12 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1478861876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.723 2 DEBUG oslo_concurrency.processutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.731 2 DEBUG nova.compute.provider_tree [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.746 2 DEBUG nova.scheduler.client.report [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 
'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.770 2 DEBUG oslo_concurrency.lockutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.813 2 INFO nova.scheduler.client.report [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Deleted allocations for instance 9d663561-9fd7-4dea-b31c-23b820127bbe#033[00m Oct 14 06:13:12 localhost nova_compute[295778]: 2025-10-14 10:13:12.899 2 DEBUG oslo_concurrency.lockutils [None req-f448f9a7-e24d-43dd-bad6-e86239ce79f3 a5c8b032521c4660a9f50471da931c3a 67facb686b1a45e4af5a7329836978ce - - default default] Lock "9d663561-9fd7-4dea-b31c-23b820127bbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 3.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:13:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 146 MiB data, 867 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 2.4 KiB/s wr, 27 op/s Oct 14 06:13:13 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:13.393 2 INFO neutron.agent.securitygroups_rpc [None req-b162a273-2c5c-4b82-977d-185565862f6c 
e654b0e5afc74f6c8660c559a7d225d2 a2d4f9e7e0df4c00a4b53d184050c204 - - default default] Security group member updated ['e76cc9ce-8b06-463e-9791-181ca08926cd']#033[00m Oct 14 06:13:13 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:13.716 270389 INFO neutron.agent.linux.ip_lib [None req-23043c89-e235-4337-9b31-ace48dde6937 - - - - - -] Device tap8887495a-15 cannot be used as it has no MAC address#033[00m Oct 14 06:13:13 localhost nova_compute[295778]: 2025-10-14 10:13:13.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:13 localhost kernel: device tap8887495a-15 entered promiscuous mode Oct 14 06:13:13 localhost ovn_controller[156286]: 2025-10-14T10:13:13Z|00138|binding|INFO|Claiming lport 8887495a-1577-43ab-9062-803d8800d29e for this chassis. Oct 14 06:13:13 localhost nova_compute[295778]: 2025-10-14 10:13:13.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:13 localhost NetworkManager[5972]: [1760436793.7852] manager: (tap8887495a-15): new Generic device (/org/freedesktop/NetworkManager/Devices/30) Oct 14 06:13:13 localhost ovn_controller[156286]: 2025-10-14T10:13:13Z|00139|binding|INFO|8887495a-1577-43ab-9062-803d8800d29e: Claiming unknown Oct 14 06:13:13 localhost systemd-udevd[326977]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:13:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:13.818 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-b3491b9b-b58a-4f3c-a043-e03d52c36044', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b3491b9b-b58a-4f3c-a043-e03d52c36044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aadbca62f85049bbb5689b00ddbce91d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36fa2aef-ce53-4c4c-b5e5-d803d4b9c294, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8887495a-1577-43ab-9062-803d8800d29e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:13:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:13.820 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 8887495a-1577-43ab-9062-803d8800d29e in datapath b3491b9b-b58a-4f3c-a043-e03d52c36044 bound to our chassis#033[00m Oct 14 06:13:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:13.825 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b3491b9b-b58a-4f3c-a043-e03d52c36044 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:13:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:13.827 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3d4973-ccd2-4d67-bda9-4fc74493c3a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:13 localhost journal[236030]: ethtool ioctl error on tap8887495a-15: No such device Oct 14 06:13:13 localhost journal[236030]: ethtool ioctl error on tap8887495a-15: No such device Oct 14 06:13:13 localhost ovn_controller[156286]: 2025-10-14T10:13:13Z|00140|binding|INFO|Setting lport 8887495a-1577-43ab-9062-803d8800d29e ovn-installed in OVS Oct 14 06:13:13 localhost ovn_controller[156286]: 2025-10-14T10:13:13Z|00141|binding|INFO|Setting lport 8887495a-1577-43ab-9062-803d8800d29e up in Southbound Oct 14 06:13:13 localhost journal[236030]: ethtool ioctl error on tap8887495a-15: No such device Oct 14 06:13:13 localhost nova_compute[295778]: 2025-10-14 10:13:13.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:13 localhost journal[236030]: ethtool ioctl error on tap8887495a-15: No such device Oct 14 06:13:13 localhost journal[236030]: ethtool ioctl error on tap8887495a-15: No such device Oct 14 06:13:13 localhost journal[236030]: ethtool ioctl error on tap8887495a-15: No such device Oct 14 06:13:13 localhost journal[236030]: ethtool ioctl error on tap8887495a-15: No such device Oct 14 06:13:13 localhost journal[236030]: ethtool ioctl error on tap8887495a-15: No such device Oct 14 06:13:13 localhost nova_compute[295778]: 2025-10-14 10:13:13.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:13 localhost nova_compute[295778]: 2025-10-14 10:13:13.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:14 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:14.148 2 INFO neutron.agent.securitygroups_rpc [None req-424a51ba-7455-4cd3-8a57-ddbc38b4f9ef e654b0e5afc74f6c8660c559a7d225d2 a2d4f9e7e0df4c00a4b53d184050c204 - - default default] Security group member updated ['e76cc9ce-8b06-463e-9791-181ca08926cd']#033[00m Oct 14 06:13:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:14.184 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:14 localhost nova_compute[295778]: 2025-10-14 10:13:14.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:14 localhost podman[327048]: Oct 14 06:13:14 localhost podman[327048]: 2025-10-14 10:13:14.722111517 +0000 UTC m=+0.135036586 container create 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009) Oct 14 06:13:14 localhost podman[327048]: 2025-10-14 10:13:14.636924432 +0000 UTC m=+0.049849581 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:13:14 localhost systemd[1]: Started libpod-conmon-56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622.scope. 
Oct 14 06:13:14 localhost systemd[1]: tmp-crun.8oKCCc.mount: Deactivated successfully. Oct 14 06:13:14 localhost systemd[1]: Started libcrun container. Oct 14 06:13:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3b83895763e97506aa877ba94d903427dec0f228721d2cfbfd94cd9e1ca3cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:13:14 localhost podman[327048]: 2025-10-14 10:13:14.815462258 +0000 UTC m=+0.228387327 container init 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:13:14 localhost podman[327048]: 2025-10-14 10:13:14.823398689 +0000 UTC m=+0.236323768 container start 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 06:13:14 localhost dnsmasq[327066]: started, version 2.85 cachesize 150 Oct 14 06:13:14 localhost dnsmasq[327066]: DNS service limited to local subnets Oct 14 06:13:14 localhost dnsmasq[327066]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua 
TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:13:14 localhost dnsmasq[327066]: warning: no upstream servers configured Oct 14 06:13:14 localhost dnsmasq-dhcp[327066]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:13:14 localhost dnsmasq[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/addn_hosts - 0 addresses Oct 14 06:13:14 localhost dnsmasq-dhcp[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/host Oct 14 06:13:14 localhost dnsmasq-dhcp[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/opts Oct 14 06:13:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:14.888 270389 INFO neutron.agent.dhcp.agent [None req-23043c89-e235-4337-9b31-ace48dde6937 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:13:14Z, description=, device_id=5413a9dc-6e65-42c6-af70-278afed111e0, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=939e785b-2b00-4e26-a48b-3f143de189e7, ip_allocation=immediate, mac_address=fa:16:3e:07:df:a4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:13:09Z, description=, dns_domain=, id=b3491b9b-b58a-4f3c-a043-e03d52c36044, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1404506611, port_security_enabled=True, project_id=aadbca62f85049bbb5689b00ddbce91d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=65285, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1012, status=ACTIVE, subnets=['7f0bc6c0-2051-4e4f-b9d2-ff297d4124fe'], tags=[], tenant_id=aadbca62f85049bbb5689b00ddbce91d, 
updated_at=2025-10-14T10:13:12Z, vlan_transparent=None, network_id=b3491b9b-b58a-4f3c-a043-e03d52c36044, port_security_enabled=False, project_id=aadbca62f85049bbb5689b00ddbce91d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1046, status=DOWN, tags=[], tenant_id=aadbca62f85049bbb5689b00ddbce91d, updated_at=2025-10-14T10:13:14Z on network b3491b9b-b58a-4f3c-a043-e03d52c36044#033[00m Oct 14 06:13:15 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:15.092 270389 INFO neutron.agent.dhcp.agent [None req-870c362f-922c-43de-a5aa-2ce78efd92b0 - - - - - -] DHCP configuration for ports {'212b533f-bf52-4e1f-a4be-9c29e3510717'} is completed#033[00m Oct 14 06:13:15 localhost dnsmasq[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/addn_hosts - 1 addresses Oct 14 06:13:15 localhost dnsmasq-dhcp[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/host Oct 14 06:13:15 localhost podman[327085]: 2025-10-14 10:13:15.12676529 +0000 UTC m=+0.070815676 container kill 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:13:15 localhost dnsmasq-dhcp[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/opts Oct 14 06:13:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 161 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 2.0 MiB/s wr, 75 op/s Oct 14 06:13:15 localhost 
neutron_dhcp_agent[270385]: 2025-10-14 10:13:15.427 270389 INFO neutron.agent.dhcp.agent [None req-b6e20fcb-0b94-4531-818e-466396cb47ff - - - - - -] DHCP configuration for ports {'939e785b-2b00-4e26-a48b-3f143de189e7'} is completed#033[00m Oct 14 06:13:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:13:15 localhost podman[327106]: 2025-10-14 10:13:15.808042308 +0000 UTC m=+0.099223319 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:13:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 06:13:15 localhost podman[327106]: 2025-10-14 10:13:15.827928514 +0000 UTC m=+0.119109515 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:13:15 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:13:15 localhost systemd[1]: tmp-crun.PyYRQT.mount: Deactivated successfully. 
Oct 14 06:13:15 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:15.923 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:13:14Z, description=, device_id=5413a9dc-6e65-42c6-af70-278afed111e0, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=939e785b-2b00-4e26-a48b-3f143de189e7, ip_allocation=immediate, mac_address=fa:16:3e:07:df:a4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:13:09Z, description=, dns_domain=, id=b3491b9b-b58a-4f3c-a043-e03d52c36044, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1404506611, port_security_enabled=True, project_id=aadbca62f85049bbb5689b00ddbce91d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=65285, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1012, status=ACTIVE, subnets=['7f0bc6c0-2051-4e4f-b9d2-ff297d4124fe'], tags=[], tenant_id=aadbca62f85049bbb5689b00ddbce91d, updated_at=2025-10-14T10:13:12Z, vlan_transparent=None, network_id=b3491b9b-b58a-4f3c-a043-e03d52c36044, port_security_enabled=False, project_id=aadbca62f85049bbb5689b00ddbce91d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1046, status=DOWN, tags=[], tenant_id=aadbca62f85049bbb5689b00ddbce91d, updated_at=2025-10-14T10:13:14Z on network b3491b9b-b58a-4f3c-a043-e03d52c36044#033[00m Oct 14 06:13:15 localhost podman[327129]: 2025-10-14 10:13:15.929875303 +0000 UTC m=+0.092126840 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Oct 14 06:13:15 localhost podman[327129]: 2025-10-14 10:13:15.970319394 +0000 UTC m=+0.132570991 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 06:13:15 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:13:16 localhost dnsmasq[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/addn_hosts - 1 addresses Oct 14 06:13:16 localhost dnsmasq-dhcp[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/host Oct 14 06:13:16 localhost podman[327163]: 2025-10-14 10:13:16.118800205 +0000 UTC m=+0.059406734 container kill 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:13:16 localhost dnsmasq-dhcp[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/opts Oct 14 06:13:16 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:16.376 270389 INFO neutron.agent.dhcp.agent [None req-2bb16b43-fc7f-42bb-96e7-cba0e5f85dd2 - - - - - -] DHCP configuration for ports {'939e785b-2b00-4e26-a48b-3f143de189e7'} is completed#033[00m Oct 14 06:13:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 161 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 2.0 MiB/s wr, 75 op/s Oct 14 06:13:17 localhost nova_compute[295778]: 2025-10-14 10:13:17.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:17 localhost dnsmasq[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/addn_hosts - 0 addresses Oct 14 06:13:17 localhost dnsmasq-dhcp[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/host Oct 14 06:13:17 localhost podman[327199]: 
2025-10-14 10:13:17.89833113 +0000 UTC m=+0.058763307 container kill 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:13:17 localhost dnsmasq-dhcp[327066]: read /var/lib/neutron/dhcp/b3491b9b-b58a-4f3c-a043-e03d52c36044/opts Oct 14 06:13:18 localhost nova_compute[295778]: 2025-10-14 10:13:18.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:18 localhost kernel: device tap8887495a-15 left promiscuous mode Oct 14 06:13:18 localhost ovn_controller[156286]: 2025-10-14T10:13:18Z|00142|binding|INFO|Releasing lport 8887495a-1577-43ab-9062-803d8800d29e from this chassis (sb_readonly=0) Oct 14 06:13:18 localhost ovn_controller[156286]: 2025-10-14T10:13:18Z|00143|binding|INFO|Setting lport 8887495a-1577-43ab-9062-803d8800d29e down in Southbound Oct 14 06:13:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:18.086 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 
'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-b3491b9b-b58a-4f3c-a043-e03d52c36044', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b3491b9b-b58a-4f3c-a043-e03d52c36044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aadbca62f85049bbb5689b00ddbce91d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36fa2aef-ce53-4c4c-b5e5-d803d4b9c294, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8887495a-1577-43ab-9062-803d8800d29e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:13:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:18.087 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 8887495a-1577-43ab-9062-803d8800d29e in datapath b3491b9b-b58a-4f3c-a043-e03d52c36044 unbound from our chassis#033[00m Oct 14 06:13:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:18.090 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b3491b9b-b58a-4f3c-a043-e03d52c36044 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:13:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:18.091 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e26c17-475f-4a5c-bedf-4fd1a85e9274]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:18 localhost nova_compute[295778]: 2025-10-14 10:13:18.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e122 do_prune osdmap full prune enabled Oct 14 06:13:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e123 e123: 6 total, 6 up, 6 in Oct 14 06:13:18 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e123: 6 total, 6 up, 6 in Oct 14 06:13:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 161 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 2.0 MiB/s wr, 47 op/s Oct 14 06:13:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e123 do_prune osdmap full prune enabled Oct 14 06:13:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e124 e124: 6 total, 6 up, 6 in Oct 14 06:13:19 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e124: 6 total, 6 up, 6 in Oct 14 06:13:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:19 localhost nova_compute[295778]: 2025-10-14 10:13:19.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:19 localhost nova_compute[295778]: 2025-10-14 10:13:19.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:13:19 localhost nova_compute[295778]: 2025-10-14 10:13:19.925 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m 
Oct 14 06:13:19 localhost nova_compute[295778]: 2025-10-14 10:13:19.925 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:13:19 localhost nova_compute[295778]: 2025-10-14 10:13:19.925 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:13:19 localhost nova_compute[295778]: 2025-10-14 10:13:19.926 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:13:19 localhost nova_compute[295778]: 2025-10-14 10:13:19.926 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:13:20 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:20.200 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:13:20 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1126328869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:13:20 localhost nova_compute[295778]: 2025-10-14 10:13:20.424 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:13:20 localhost nova_compute[295778]: 2025-10-14 10:13:20.590 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:13:20 localhost nova_compute[295778]: 2025-10-14 10:13:20.591 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11458MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": 
"8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:13:20 localhost nova_compute[295778]: 2025-10-14 10:13:20.592 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:13:20 localhost nova_compute[295778]: 2025-10-14 10:13:20.592 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:13:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e124 do_prune osdmap full prune enabled Oct 14 06:13:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e125 e125: 6 
total, 6 up, 6 in Oct 14 06:13:20 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e125: 6 total, 6 up, 6 in Oct 14 06:13:20 localhost nova_compute[295778]: 2025-10-14 10:13:20.646 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:13:20 localhost nova_compute[295778]: 2025-10-14 10:13:20.646 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:13:20 localhost nova_compute[295778]: 2025-10-14 10:13:20.671 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:13:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:13:21 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1108218758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.142 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.151 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:13:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 145 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 16 MiB/s wr, 94 op/s Oct 14 06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.176 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:13:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:21.196 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.218 2 DEBUG 
nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.218 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:13:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:21.558 270389 INFO neutron.agent.linux.ip_lib [None req-c36bf633-47b1-4695-b7d2-3cf47d69c9a6 - - - - - -] Device tapee61a2bf-09 cannot be used as it has no MAC address#033[00m Oct 14 06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:21 localhost kernel: device tapee61a2bf-09 entered promiscuous mode Oct 14 06:13:21 localhost NetworkManager[5972]: [1760436801.5927] manager: (tapee61a2bf-09): new Generic device (/org/freedesktop/NetworkManager/Devices/31) Oct 14 06:13:21 localhost ovn_controller[156286]: 2025-10-14T10:13:21Z|00144|binding|INFO|Claiming lport ee61a2bf-099c-4c45-b8eb-51fc84afc19d for this chassis. Oct 14 06:13:21 localhost ovn_controller[156286]: 2025-10-14T10:13:21Z|00145|binding|INFO|ee61a2bf-099c-4c45-b8eb-51fc84afc19d: Claiming unknown Oct 14 06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:21 localhost systemd-udevd[327315]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:13:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:21.604 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-06e0d5c0-0d26-410e-9d73-d42daa0e4f43', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06e0d5c0-0d26-410e-9d73-d42daa0e4f43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aadbca62f85049bbb5689b00ddbce91d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3d47c7e-5ba7-4b2a-9a53-4baa3b423d30, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ee61a2bf-099c-4c45-b8eb-51fc84afc19d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:13:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:21.606 161932 INFO neutron.agent.ovn.metadata.agent [-] Port ee61a2bf-099c-4c45-b8eb-51fc84afc19d in datapath 06e0d5c0-0d26-410e-9d73-d42daa0e4f43 bound to our chassis#033[00m Oct 14 06:13:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:21.608 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 06e0d5c0-0d26-410e-9d73-d42daa0e4f43 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:13:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:21.609 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[d71c1ab4-2e31-4cfd-997e-86cc406887eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:21 localhost journal[236030]: ethtool ioctl error on tapee61a2bf-09: No such device Oct 14 06:13:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:13:21 localhost ovn_controller[156286]: 2025-10-14T10:13:21Z|00146|binding|INFO|Setting lport ee61a2bf-099c-4c45-b8eb-51fc84afc19d ovn-installed in OVS Oct 14 06:13:21 localhost ovn_controller[156286]: 2025-10-14T10:13:21Z|00147|binding|INFO|Setting lport ee61a2bf-099c-4c45-b8eb-51fc84afc19d up in Southbound Oct 14 06:13:21 localhost journal[236030]: ethtool ioctl error on tapee61a2bf-09: No such device Oct 14 06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:21 localhost journal[236030]: ethtool ioctl error on tapee61a2bf-09: No such device Oct 14 06:13:21 localhost journal[236030]: ethtool ioctl error on tapee61a2bf-09: No such device Oct 14 06:13:21 localhost journal[236030]: ethtool ioctl error on tapee61a2bf-09: No such device Oct 14 06:13:21 localhost journal[236030]: ethtool ioctl error on tapee61a2bf-09: No such device Oct 14 06:13:21 localhost journal[236030]: ethtool ioctl error on tapee61a2bf-09: No such device Oct 14 06:13:21 localhost journal[236030]: ethtool ioctl error on tapee61a2bf-09: No such device Oct 14 06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 
06:13:21 localhost nova_compute[295778]: 2025-10-14 10:13:21.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:13:21 localhost podman[327329]: 2025-10-14 10:13:21.747925498 +0000 UTC m=+0.101401785 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3) Oct 14 06:13:21 localhost podman[327329]: 2025-10-14 10:13:21.757865511 +0000 UTC m=+0.111341838 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
io.buildah.version=1.41.3) Oct 14 06:13:21 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:13:21 localhost podman[327372]: 2025-10-14 10:13:21.865619724 +0000 UTC m=+0.107381943 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid) Oct 14 06:13:21 localhost podman[327372]: 2025-10-14 10:13:21.883188509 +0000 UTC 
m=+0.124950688 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 06:13:21 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:13:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:13:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:13:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:13:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:13:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:13:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:13:22 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 25b458b4-d4b7-4314-9bb9-9a9bc51927cd (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:13:22 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 25b458b4-d4b7-4314-9bb9-9a9bc51927cd (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:13:22 localhost ceph-mgr[300442]: [progress INFO root] Completed event 25b458b4-d4b7-4314-9bb9-9a9bc51927cd (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:13:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:13:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:13:22 localhost 
ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:13:22 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:13:22 localhost podman[327468]: Oct 14 06:13:22 localhost podman[327468]: 2025-10-14 10:13:22.535147961 +0000 UTC m=+0.095180101 container create 03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e0d5c0-0d26-410e-9d73-d42daa0e4f43, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 06:13:22 localhost systemd[1]: Started libpod-conmon-03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36.scope. Oct 14 06:13:22 localhost podman[327468]: 2025-10-14 10:13:22.487969371 +0000 UTC m=+0.048001521 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:13:22 localhost systemd[1]: Started libcrun container. 
Oct 14 06:13:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab899864176676e328df30d88203b5b3d615d063852bc760f723d5fabda08a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:13:22 localhost podman[327468]: 2025-10-14 10:13:22.630953747 +0000 UTC m=+0.190985897 container init 03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e0d5c0-0d26-410e-9d73-d42daa0e4f43, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:13:22 localhost podman[327468]: 2025-10-14 10:13:22.639274757 +0000 UTC m=+0.199306907 container start 03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e0d5c0-0d26-410e-9d73-d42daa0e4f43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:13:22 localhost dnsmasq[327487]: started, version 2.85 cachesize 150 Oct 14 06:13:22 localhost dnsmasq[327487]: DNS service limited to local subnets Oct 14 06:13:22 localhost dnsmasq[327487]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:13:22 localhost dnsmasq[327487]: warning: no upstream servers 
configured Oct 14 06:13:22 localhost dnsmasq-dhcp[327487]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Oct 14 06:13:22 localhost dnsmasq[327487]: read /var/lib/neutron/dhcp/06e0d5c0-0d26-410e-9d73-d42daa0e4f43/addn_hosts - 0 addresses Oct 14 06:13:22 localhost dnsmasq-dhcp[327487]: read /var/lib/neutron/dhcp/06e0d5c0-0d26-410e-9d73-d42daa0e4f43/host Oct 14 06:13:22 localhost dnsmasq-dhcp[327487]: read /var/lib/neutron/dhcp/06e0d5c0-0d26-410e-9d73-d42daa0e4f43/opts Oct 14 06:13:22 localhost nova_compute[295778]: 2025-10-14 10:13:22.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:22.761 270389 INFO neutron.agent.dhcp.agent [None req-51c848c1-e7cf-4d97-97ef-4d5870a349d3 - - - - - -] DHCP configuration for ports {'195149c7-697d-4faf-a98a-5a89b7b9385f'} is completed#033[00m Oct 14 06:13:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 16 MiB/s wr, 94 op/s Oct 14 06:13:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:13:23 localhost snmpd[68028]: empty variable list in _query Oct 14 06:13:24 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:13:24 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 3192 writes, 27K keys, 3192 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.08 MB/s#012Cumulative WAL: 3192 writes, 3192 syncs, 1.00 writes per sync, written: 0.05 GB, 0.08 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3192 writes, 27K keys, 3192 commit groups, 1.0 writes per commit group, ingest: 48.01 MB, 0.08 MB/s#012Interval WAL: 3192 writes, 3192 syncs, 1.00 writes per 
sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 156.1 0.24 0.09 13 0.018 0 0 0.0 0.0#012 L6 1/0 16.21 MB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 5.4 186.0 168.4 1.17 0.55 12 0.097 144K 6116 0.0 0.0#012 Sum 1/0 16.21 MB 0.0 0.2 0.0 0.2 0.2 0.1 0.0 6.4 154.8 166.3 1.40 0.64 25 0.056 144K 6116 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.2 0.0 0.2 0.2 0.1 0.0 6.4 155.3 166.9 1.40 0.64 24 0.058 144K 6116 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 0.0 186.0 168.4 1.17 0.55 12 0.097 144K 6116 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 159.0 0.23 0.09 12 0.019 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0.00 0.00 1 0.004 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.036, interval 0.036#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative 
compaction: 0.23 GB write, 0.39 MB/s write, 0.21 GB read, 0.36 MB/s read, 1.4 seconds#012Interval compaction: 0.23 GB write, 0.39 MB/s write, 0.21 GB read, 0.36 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563d4a76f350#2 capacity: 308.00 MB usage: 22.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000164 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1123,21.82 MB,7.08589%) FilterBlock(25,426.48 KB,0.135224%) IndexBlock(25,551.05 KB,0.174718%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Oct 14 06:13:24 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:13:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:13:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:13:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:13:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e125 do_prune osdmap full prune enabled Oct 14 06:13:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e126 e126: 6 total, 6 up, 6 in Oct 14 06:13:24 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e126: 6 total, 6 up, 6 in Oct 14 06:13:24 localhost nova_compute[295778]: 2025-10-14 10:13:24.650 
2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 14 06:13:24 localhost nova_compute[295778]: 2025-10-14 10:13:24.651 2 INFO nova.compute.manager [-] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] VM Stopped (Lifecycle Event)#033[00m Oct 14 06:13:24 localhost nova_compute[295778]: 2025-10-14 10:13:24.685 2 DEBUG nova.compute.manager [None req-67485022-3a22-465b-957a-38a433e69f51 - - - - - -] [instance: 9d663561-9fd7-4dea-b31c-23b820127bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 14 06:13:24 localhost nova_compute[295778]: 2025-10-14 10:13:24.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 16 MiB/s wr, 121 op/s Oct 14 06:13:26 localhost nova_compute[295778]: 2025-10-14 10:13:26.219 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:13:26 localhost nova_compute[295778]: 2025-10-14 10:13:26.219 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:13:26 localhost dnsmasq[327487]: exiting on receipt of SIGTERM Oct 14 06:13:26 localhost podman[327505]: 2025-10-14 10:13:26.355921778 +0000 UTC m=+0.064744125 container kill 03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e0d5c0-0d26-410e-9d73-d42daa0e4f43, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:13:26 localhost systemd[1]: libpod-03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36.scope: Deactivated successfully. Oct 14 06:13:26 localhost podman[327519]: 2025-10-14 10:13:26.432026612 +0000 UTC m=+0.060259556 container died 03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e0d5c0-0d26-410e-9d73-d42daa0e4f43, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:13:26 localhost systemd[1]: tmp-crun.m2N5V2.mount: Deactivated successfully. 
Oct 14 06:13:26 localhost podman[327519]: 2025-10-14 10:13:26.475175814 +0000 UTC m=+0.103408688 container cleanup 03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e0d5c0-0d26-410e-9d73-d42daa0e4f43, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 06:13:26 localhost systemd[1]: libpod-conmon-03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36.scope: Deactivated successfully. Oct 14 06:13:26 localhost podman[327520]: 2025-10-14 10:13:26.513038357 +0000 UTC m=+0.136079174 container remove 03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e0d5c0-0d26-410e-9d73-d42daa0e4f43, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:13:26 localhost nova_compute[295778]: 2025-10-14 10:13:26.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:26 localhost ovn_controller[156286]: 2025-10-14T10:13:26Z|00148|binding|INFO|Releasing lport ee61a2bf-099c-4c45-b8eb-51fc84afc19d from this chassis (sb_readonly=0) Oct 14 06:13:26 localhost ovn_controller[156286]: 2025-10-14T10:13:26Z|00149|binding|INFO|Setting lport 
ee61a2bf-099c-4c45-b8eb-51fc84afc19d down in Southbound Oct 14 06:13:26 localhost kernel: device tapee61a2bf-09 left promiscuous mode Oct 14 06:13:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:26.534 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-06e0d5c0-0d26-410e-9d73-d42daa0e4f43', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06e0d5c0-0d26-410e-9d73-d42daa0e4f43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aadbca62f85049bbb5689b00ddbce91d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3d47c7e-5ba7-4b2a-9a53-4baa3b423d30, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ee61a2bf-099c-4c45-b8eb-51fc84afc19d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:13:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:26.536 161932 INFO neutron.agent.ovn.metadata.agent [-] Port ee61a2bf-099c-4c45-b8eb-51fc84afc19d in datapath 06e0d5c0-0d26-410e-9d73-d42daa0e4f43 unbound from our chassis#033[00m Oct 14 06:13:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:26.538 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
06e0d5c0-0d26-410e-9d73-d42daa0e4f43 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:13:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:26.539 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[128e5c53-45af-4218-9323-79c7255efe3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:26 localhost nova_compute[295778]: 2025-10-14 10:13:26.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:26.569 270389 INFO neutron.agent.dhcp.agent [None req-7d7ff5e1-195d-4465-a18b-df63b556740a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:26 localhost nova_compute[295778]: 2025-10-14 10:13:26.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:13:26 localhost nova_compute[295778]: 2025-10-14 10:13:26.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:13:26 localhost nova_compute[295778]: 2025-10-14 10:13:26.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:13:26 localhost nova_compute[295778]: 2025-10-14 10:13:26.904 2 DEBUG 
nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:13:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:26.944 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 12 MiB/s wr, 92 op/s Oct 14 06:13:27 localhost nova_compute[295778]: 2025-10-14 10:13:27.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:27 localhost systemd[1]: var-lib-containers-storage-overlay-0ab899864176676e328df30d88203b5b3d615d063852bc760f723d5fabda08a7-merged.mount: Deactivated successfully. Oct 14 06:13:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03261deaaf90d566c060096b6c0ecd72aec4e05761dbc12157e0fbfe54b70e36-userdata-shm.mount: Deactivated successfully. Oct 14 06:13:27 localhost systemd[1]: run-netns-qdhcp\x2d06e0d5c0\x2d0d26\x2d410e\x2d9d73\x2dd42daa0e4f43.mount: Deactivated successfully. Oct 14 06:13:27 localhost nova_compute[295778]: 2025-10-14 10:13:27.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:13:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:13:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:13:28 localhost systemd[1]: tmp-crun.zl5hQn.mount: Deactivated successfully. Oct 14 06:13:28 localhost podman[327567]: 2025-10-14 10:13:28.480843355 +0000 UTC m=+0.076567927 container kill 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:13:28 localhost dnsmasq[327066]: exiting on receipt of SIGTERM Oct 14 06:13:28 localhost systemd[1]: libpod-56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622.scope: Deactivated successfully. Oct 14 06:13:28 localhost podman[327602]: 2025-10-14 10:13:28.60224259 +0000 UTC m=+0.100818020 container died 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009) Oct 14 06:13:28 localhost podman[327602]: 2025-10-14 10:13:28.62567162 +0000 UTC m=+0.124247030 container cleanup 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:13:28 localhost systemd[1]: libpod-conmon-56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622.scope: Deactivated successfully. Oct 14 06:13:28 localhost podman[327579]: 2025-10-14 10:13:28.582571839 +0000 UTC m=+0.112260663 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 06:13:28 localhost podman[327582]: 2025-10-14 10:13:28.599116187 +0000 UTC m=+0.120751168 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:13:28 localhost podman[327579]: 2025-10-14 10:13:28.66114433 +0000 UTC m=+0.190833154 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, vcs-type=git, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=) Oct 14 06:13:28 localhost podman[327580]: 2025-10-14 10:13:28.662903246 +0000 UTC m=+0.189371105 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, tcib_managed=true) Oct 14 06:13:28 localhost podman[327582]: 2025-10-14 10:13:28.682141995 +0000 UTC m=+0.203776956 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:13:28 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:13:28 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:13:28 localhost podman[327580]: 2025-10-14 10:13:28.732900189 +0000 UTC m=+0.259368078 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:13:28 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:13:28 localhost podman[327606]: 2025-10-14 10:13:28.818968838 +0000 UTC m=+0.311681653 container remove 56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3491b9b-b58a-4f3c-a043-e03d52c36044, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:13:28 localhost nova_compute[295778]: 2025-10-14 10:13:28.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:13:28 localhost nova_compute[295778]: 2025-10-14 10:13:28.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:13:28 localhost nova_compute[295778]: 2025-10-14 10:13:28.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:13:28 localhost nova_compute[295778]: 2025-10-14 10:13:28.933 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:13:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e126 do_prune osdmap full prune enabled Oct 14 06:13:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e127 e127: 6 total, 6 up, 6 in Oct 14 06:13:29 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e127: 6 total, 6 up, 6 in Oct 14 06:13:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:29.057 270389 INFO neutron.agent.dhcp.agent [None req-07ec2d51-c424-4915-8571-d4be5cedcc57 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 1023 B/s wr, 20 op/s Oct 14 06:13:29 localhost systemd[1]: var-lib-containers-storage-overlay-1d3b83895763e97506aa877ba94d903427dec0f228721d2cfbfd94cd9e1ca3cd-merged.mount: Deactivated successfully. Oct 14 06:13:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56eec8a229e556cf678efbfaf9537d4937ea78c859b458074ff004d16e84b622-userdata-shm.mount: Deactivated successfully. Oct 14 06:13:29 localhost systemd[1]: run-netns-qdhcp\x2db3491b9b\x2db58a\x2d4f3c\x2da043\x2de03d52c36044.mount: Deactivated successfully. 
Oct 14 06:13:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e127 do_prune osdmap full prune enabled Oct 14 06:13:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e128 e128: 6 total, 6 up, 6 in Oct 14 06:13:29 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e128: 6 total, 6 up, 6 in Oct 14 06:13:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:29.639 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:29 localhost nova_compute[295778]: 2025-10-14 10:13:29.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:29 localhost nova_compute[295778]: 2025-10-14 10:13:29.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:13:30 localhost podman[246584]: time="2025-10-14T10:13:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:13:30 localhost podman[246584]: @ - - [14/Oct/2025:10:13:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:13:30 localhost podman[246584]: @ - - [14/Oct/2025:10:13:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18887 "" "Go-http-client/1.1" Oct 14 06:13:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e128 do_prune osdmap full prune enabled Oct 14 06:13:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e129 
e129: 6 total, 6 up, 6 in Oct 14 06:13:30 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e129: 6 total, 6 up, 6 in Oct 14 06:13:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:30.820 270389 INFO neutron.agent.linux.ip_lib [None req-ae9de35c-ed03-4e22-bd10-0184d5e32f99 - - - - - -] Device tapbec63b8a-4b cannot be used as it has no MAC address#033[00m Oct 14 06:13:30 localhost nova_compute[295778]: 2025-10-14 10:13:30.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:30 localhost kernel: device tapbec63b8a-4b entered promiscuous mode Oct 14 06:13:30 localhost NetworkManager[5972]: [1760436810.8557] manager: (tapbec63b8a-4b): new Generic device (/org/freedesktop/NetworkManager/Devices/32) Oct 14 06:13:30 localhost ovn_controller[156286]: 2025-10-14T10:13:30Z|00150|binding|INFO|Claiming lport bec63b8a-4b73-40a6-b72b-b8d7aa888d75 for this chassis. Oct 14 06:13:30 localhost ovn_controller[156286]: 2025-10-14T10:13:30Z|00151|binding|INFO|bec63b8a-4b73-40a6-b72b-b8d7aa888d75: Claiming unknown Oct 14 06:13:30 localhost nova_compute[295778]: 2025-10-14 10:13:30.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:30 localhost systemd-udevd[327688]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:13:30 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:30.866 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-365508f7-5d28-41de-9fd7-3b1733c35155', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-365508f7-5d28-41de-9fd7-3b1733c35155', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367661f675da42768786f882cc6902ac', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63da96cb-11c2-4d7a-88f6-ebfd24f06eae, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bec63b8a-4b73-40a6-b72b-b8d7aa888d75) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:13:30 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:30.868 161932 INFO neutron.agent.ovn.metadata.agent [-] Port bec63b8a-4b73-40a6-b72b-b8d7aa888d75 in datapath 365508f7-5d28-41de-9fd7-3b1733c35155 bound to our chassis#033[00m Oct 14 06:13:30 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:30.870 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 365508f7-5d28-41de-9fd7-3b1733c35155 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:13:30 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:30.871 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b6051066-aaf1-4214-ab4b-0c31690a59b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:30 localhost journal[236030]: ethtool ioctl error on tapbec63b8a-4b: No such device Oct 14 06:13:30 localhost nova_compute[295778]: 2025-10-14 10:13:30.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:30 localhost ovn_controller[156286]: 2025-10-14T10:13:30Z|00152|binding|INFO|Setting lport bec63b8a-4b73-40a6-b72b-b8d7aa888d75 ovn-installed in OVS Oct 14 06:13:30 localhost ovn_controller[156286]: 2025-10-14T10:13:30Z|00153|binding|INFO|Setting lport bec63b8a-4b73-40a6-b72b-b8d7aa888d75 up in Southbound Oct 14 06:13:30 localhost nova_compute[295778]: 2025-10-14 10:13:30.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:30 localhost nova_compute[295778]: 2025-10-14 10:13:30.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:30 localhost journal[236030]: ethtool ioctl error on tapbec63b8a-4b: No such device Oct 14 06:13:30 localhost journal[236030]: ethtool ioctl error on tapbec63b8a-4b: No such device Oct 14 06:13:30 localhost journal[236030]: ethtool ioctl error on tapbec63b8a-4b: No such device Oct 14 06:13:30 localhost journal[236030]: ethtool ioctl error on tapbec63b8a-4b: No such device Oct 14 06:13:30 localhost journal[236030]: ethtool ioctl error on tapbec63b8a-4b: No such device Oct 14 06:13:30 localhost journal[236030]: ethtool ioctl error on tapbec63b8a-4b: No such device Oct 14 06:13:30 localhost journal[236030]: 
ethtool ioctl error on tapbec63b8a-4b: No such device Oct 14 06:13:30 localhost nova_compute[295778]: 2025-10-14 10:13:30.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:30 localhost nova_compute[295778]: 2025-10-14 10:13:30.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 3.8 KiB/s wr, 39 op/s Oct 14 06:13:31 localhost podman[327758]: Oct 14 06:13:31 localhost podman[327758]: 2025-10-14 10:13:31.821248055 +0000 UTC m=+0.087615232 container create a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:13:31 localhost systemd[1]: Started libpod-conmon-a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f.scope. Oct 14 06:13:31 localhost podman[327758]: 2025-10-14 10:13:31.779623402 +0000 UTC m=+0.045990599 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:13:31 localhost systemd[1]: Started libcrun container. 
Oct 14 06:13:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcdbd7c2da97f5268b9fe19feded1895ad14d2db31e2ae8736627b13db7b3da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:13:31 localhost nova_compute[295778]: 2025-10-14 10:13:31.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:13:31 localhost podman[327758]: 2025-10-14 10:13:31.904751446 +0000 UTC m=+0.171118623 container init a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:13:31 localhost podman[327758]: 2025-10-14 10:13:31.913120907 +0000 UTC m=+0.179488074 container start a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:13:31 localhost dnsmasq[327776]: started, version 2.85 cachesize 150 Oct 14 06:13:31 localhost 
dnsmasq[327776]: DNS service limited to local subnets Oct 14 06:13:31 localhost dnsmasq[327776]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:13:31 localhost dnsmasq[327776]: warning: no upstream servers configured Oct 14 06:13:31 localhost dnsmasq-dhcp[327776]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:13:31 localhost dnsmasq[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/addn_hosts - 0 addresses Oct 14 06:13:31 localhost dnsmasq-dhcp[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/host Oct 14 06:13:31 localhost dnsmasq-dhcp[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/opts Oct 14 06:13:32 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:32.049 270389 INFO neutron.agent.dhcp.agent [None req-6c3f5612-aee9-4d40-8943-54ed767d22b5 - - - - - -] DHCP configuration for ports {'82c126d1-2fa6-45ca-b686-81230542be09'} is completed#033[00m Oct 14 06:13:32 localhost nova_compute[295778]: 2025-10-14 10:13:32.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 3.8 KiB/s wr, 39 op/s Oct 14 06:13:33 localhost openstack_network_exporter[248748]: ERROR 10:13:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:13:33 localhost openstack_network_exporter[248748]: ERROR 10:13:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:13:33 localhost openstack_network_exporter[248748]: Oct 14 06:13:33 localhost openstack_network_exporter[248748]: ERROR 10:13:33 appctl.go:144: Failed to get PID 
for ovn-northd: no control socket files found for ovn-northd Oct 14 06:13:33 localhost openstack_network_exporter[248748]: ERROR 10:13:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:13:33 localhost openstack_network_exporter[248748]: ERROR 10:13:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:13:33 localhost openstack_network_exporter[248748]: Oct 14 06:13:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:34 localhost nova_compute[295778]: 2025-10-14 10:13:34.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 5.1 KiB/s wr, 65 op/s Oct 14 06:13:36 localhost nova_compute[295778]: 2025-10-14 10:13:36.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:36.328 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:13:36 localhost 
ovn_metadata_agent[161927]: 2025-10-14 10:13:36.330 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:13:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s Oct 14 06:13:37 localhost nova_compute[295778]: 2025-10-14 10:13:37.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:38 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:38.006 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:13:37Z, description=, device_id=1c2c02fc-1a5b-40c2-9734-0c378885441f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d901d0d7-7066-4c15-87c6-819042c680f4, ip_allocation=immediate, mac_address=fa:16:3e:d1:15:82, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:13:27Z, description=, dns_domain=, id=365508f7-5d28-41de-9fd7-3b1733c35155, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-1686754386-network, port_security_enabled=True, project_id=367661f675da42768786f882cc6902ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15453, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1104, status=ACTIVE, subnets=['c0e1296c-239f-4f27-ae7f-94f1771837fb'], tags=[], tenant_id=367661f675da42768786f882cc6902ac, updated_at=2025-10-14T10:13:29Z, 
vlan_transparent=None, network_id=365508f7-5d28-41de-9fd7-3b1733c35155, port_security_enabled=False, project_id=367661f675da42768786f882cc6902ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1145, status=DOWN, tags=[], tenant_id=367661f675da42768786f882cc6902ac, updated_at=2025-10-14T10:13:37Z on network 365508f7-5d28-41de-9fd7-3b1733c35155#033[00m Oct 14 06:13:38 localhost dnsmasq[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/addn_hosts - 1 addresses Oct 14 06:13:38 localhost dnsmasq-dhcp[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/host Oct 14 06:13:38 localhost podman[327792]: 2025-10-14 10:13:38.238081283 +0000 UTC m=+0.059580068 container kill a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:13:38 localhost dnsmasq-dhcp[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/opts Oct 14 06:13:38 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:38.563 270389 INFO neutron.agent.dhcp.agent [None req-710567a1-42ee-4c1b-a263-c3ef003ccfb5 - - - - - -] DHCP configuration for ports {'d901d0d7-7066-4c15-87c6-819042c680f4'} is completed#033[00m Oct 14 06:13:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:13:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:13:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 41 op/s Oct 14 06:13:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:13:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:13:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:13:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:13:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:39.330 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:13:37Z, description=, device_id=1c2c02fc-1a5b-40c2-9734-0c378885441f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d901d0d7-7066-4c15-87c6-819042c680f4, ip_allocation=immediate, mac_address=fa:16:3e:d1:15:82, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:13:27Z, description=, dns_domain=, id=365508f7-5d28-41de-9fd7-3b1733c35155, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-1686754386-network, port_security_enabled=True, project_id=367661f675da42768786f882cc6902ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15453, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1104, status=ACTIVE, subnets=['c0e1296c-239f-4f27-ae7f-94f1771837fb'], tags=[], 
tenant_id=367661f675da42768786f882cc6902ac, updated_at=2025-10-14T10:13:29Z, vlan_transparent=None, network_id=365508f7-5d28-41de-9fd7-3b1733c35155, port_security_enabled=False, project_id=367661f675da42768786f882cc6902ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1145, status=DOWN, tags=[], tenant_id=367661f675da42768786f882cc6902ac, updated_at=2025-10-14T10:13:37Z on network 365508f7-5d28-41de-9fd7-3b1733c35155#033[00m Oct 14 06:13:39 localhost dnsmasq[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/addn_hosts - 1 addresses Oct 14 06:13:39 localhost dnsmasq-dhcp[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/host Oct 14 06:13:39 localhost dnsmasq-dhcp[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/opts Oct 14 06:13:39 localhost podman[327829]: 2025-10-14 10:13:39.566833863 +0000 UTC m=+0.059986650 container kill a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:13:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e129 do_prune osdmap full prune enabled Oct 14 06:13:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 e130: 6 total, 6 up, 6 in Oct 14 06:13:39 localhost ceph-mon[307093]: 
log_channel(cluster) log [DBG] : osdmap e130: 6 total, 6 up, 6 in Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.663675) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436819663761, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1061, "num_deletes": 263, "total_data_size": 979561, "memory_usage": 999352, "flush_reason": "Manual Compaction"} Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436819675467, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 799631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26556, "largest_seqno": 27616, "table_properties": {"data_size": 795204, "index_size": 2026, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11524, "raw_average_key_size": 21, "raw_value_size": 785735, "raw_average_value_size": 1482, "num_data_blocks": 87, "num_entries": 530, "num_filter_entries": 530, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436769, "oldest_key_time": 1760436769, "file_creation_time": 1760436819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 11850 microseconds, and 3812 cpu microseconds. Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.675526) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 799631 bytes OK Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.675555) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.677366) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.677389) EVENT_LOG_v1 {"time_micros": 1760436819677381, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.677414) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 
max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 974427, prev total WAL file size 974427, number of live WAL files 2. Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.678209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303035' seq:72057594037927935, type:22 .. '6D6772737461740034323538' seq:0, type:0; will stop at (end) Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(780KB)], [48(16MB)] Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436819678252, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 17798939, "oldest_snapshot_seqno": -1} Oct 14 06:13:39 localhost nova_compute[295778]: 2025-10-14 10:13:39.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 12403 keys, 15800157 bytes, temperature: kUnknown Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436819776540, "cf_name": "default", "job": 28, "event": 
"table_file_creation", "file_number": 51, "file_size": 15800157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15732800, "index_size": 35256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31045, "raw_key_size": 336137, "raw_average_key_size": 27, "raw_value_size": 15524630, "raw_average_value_size": 1251, "num_data_blocks": 1305, "num_entries": 12403, "num_filter_entries": 12403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.776993) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 15800157 bytes Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.778906) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.8 rd, 160.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 16.2 +0.0 blob) out(15.1 +0.0 blob), read-write-amplify(42.0) write-amplify(19.8) OK, records in: 12923, records dropped: 520 output_compression: NoCompression Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.778938) EVENT_LOG_v1 {"time_micros": 1760436819778922, "job": 28, "event": "compaction_finished", "compaction_time_micros": 98430, "compaction_time_cpu_micros": 51589, "output_level": 6, "num_output_files": 1, "total_output_size": 15800157, "num_input_records": 12923, "num_output_records": 12403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436819779214, "job": 28, "event": "table_file_deletion", "file_number": 50} Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436819781630, 
"job": 28, "event": "table_file_deletion", "file_number": 48} Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.678082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.781740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.781751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.781756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.781760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:13:39 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:13:39.781765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:13:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:39.833 270389 INFO neutron.agent.dhcp.agent [None req-5cd98166-0b0b-40d7-bd3e-86eedc616d9c - - - - - -] DHCP configuration for ports {'d901d0d7-7066-4c15-87c6-819042c680f4'} is completed#033[00m Oct 14 06:13:40 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:40.332 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:13:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 755 
MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s Oct 14 06:13:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:13:41 localhost podman[327851]: 2025-10-14 10:13:41.546017172 +0000 UTC m=+0.087732034 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 06:13:41 localhost podman[327851]: 2025-10-14 10:13:41.58710893 +0000 UTC m=+0.128823782 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 06:13:41 localhost systemd[1]: 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:13:41 localhost ovn_controller[156286]: 2025-10-14T10:13:41Z|00154|ovn_bfd|INFO|Enabled BFD on interface ovn-31b4da-0 Oct 14 06:13:41 localhost ovn_controller[156286]: 2025-10-14T10:13:41Z|00155|ovn_bfd|INFO|Enabled BFD on interface ovn-953af5-0 Oct 14 06:13:41 localhost ovn_controller[156286]: 2025-10-14T10:13:41Z|00156|ovn_bfd|INFO|Enabled BFD on interface ovn-4e3575-0 Oct 14 06:13:41 localhost nova_compute[295778]: 2025-10-14 10:13:41.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:41 localhost nova_compute[295778]: 2025-10-14 10:13:41.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:41 localhost nova_compute[295778]: 2025-10-14 10:13:41.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:41 localhost nova_compute[295778]: 2025-10-14 10:13:41.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:41 localhost nova_compute[295778]: 2025-10-14 10:13:41.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:41 localhost nova_compute[295778]: 2025-10-14 10:13:41.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:42 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:42.178 2 INFO neutron.agent.securitygroups_rpc [None req-0527c7af-f9c5-4631-a32d-7bf7bcdb7d05 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule 
updated ['18fa6a68-e215-4844-8d40-fbc027948c6c']#033[00m Oct 14 06:13:42 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:42.455 2 INFO neutron.agent.securitygroups_rpc [None req-083bb6b5-9205-407f-ba9b-c4c8032fb0d4 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['18fa6a68-e215-4844-8d40-fbc027948c6c']#033[00m Oct 14 06:13:42 localhost nova_compute[295778]: 2025-10-14 10:13:42.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s Oct 14 06:13:43 localhost nova_compute[295778]: 2025-10-14 10:13:43.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:43 localhost nova_compute[295778]: 2025-10-14 10:13:43.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:44.004 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:44 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:44.419 2 INFO neutron.agent.securitygroups_rpc [None req-db5556c0-4291-4c69-9d50-e157ec56caa1 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:44 localhost nova_compute[295778]: 2025-10-14 10:13:44.723 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:44 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:44.857 2 INFO neutron.agent.securitygroups_rpc [None req-f2ac292c-ef84-4b5c-a694-d09fa1b3803e 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:13:45 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:45.200 2 INFO neutron.agent.securitygroups_rpc [None req-57fbedb8-b01f-4a0e-8c7a-92eb222baaa6 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:45 localhost dnsmasq[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/addn_hosts - 0 addresses Oct 14 06:13:45 localhost podman[327887]: 2025-10-14 10:13:45.601468622 +0000 UTC m=+0.060910054 container kill a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:13:45 localhost dnsmasq-dhcp[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/host Oct 14 06:13:45 localhost dnsmasq-dhcp[327776]: read /var/lib/neutron/dhcp/365508f7-5d28-41de-9fd7-3b1733c35155/opts Oct 14 06:13:45 
localhost ovn_controller[156286]: 2025-10-14T10:13:45Z|00157|ovn_bfd|INFO|Disabled BFD on interface ovn-31b4da-0 Oct 14 06:13:45 localhost ovn_controller[156286]: 2025-10-14T10:13:45Z|00158|ovn_bfd|INFO|Disabled BFD on interface ovn-953af5-0 Oct 14 06:13:45 localhost ovn_controller[156286]: 2025-10-14T10:13:45Z|00159|ovn_bfd|INFO|Disabled BFD on interface ovn-4e3575-0 Oct 14 06:13:45 localhost nova_compute[295778]: 2025-10-14 10:13:45.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:45 localhost nova_compute[295778]: 2025-10-14 10:13:45.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:45 localhost nova_compute[295778]: 2025-10-14 10:13:45.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:45 localhost nova_compute[295778]: 2025-10-14 10:13:45.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:45 localhost ovn_controller[156286]: 2025-10-14T10:13:45Z|00160|binding|INFO|Releasing lport bec63b8a-4b73-40a6-b72b-b8d7aa888d75 from this chassis (sb_readonly=0) Oct 14 06:13:45 localhost ovn_controller[156286]: 2025-10-14T10:13:45Z|00161|binding|INFO|Setting lport bec63b8a-4b73-40a6-b72b-b8d7aa888d75 down in Southbound Oct 14 06:13:45 localhost kernel: device tapbec63b8a-4b left promiscuous mode Oct 14 06:13:45 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:45.803 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-365508f7-5d28-41de-9fd7-3b1733c35155', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-365508f7-5d28-41de-9fd7-3b1733c35155', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367661f675da42768786f882cc6902ac', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63da96cb-11c2-4d7a-88f6-ebfd24f06eae, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bec63b8a-4b73-40a6-b72b-b8d7aa888d75) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:13:45 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:45.805 161932 INFO neutron.agent.ovn.metadata.agent [-] Port bec63b8a-4b73-40a6-b72b-b8d7aa888d75 in datapath 365508f7-5d28-41de-9fd7-3b1733c35155 unbound from our chassis#033[00m Oct 14 06:13:45 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:45.808 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 365508f7-5d28-41de-9fd7-3b1733c35155, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:13:45 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:45.809 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[6f41b0db-3687-4cfb-be03-b96dc2865ba1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:13:45 
localhost nova_compute[295778]: 2025-10-14 10:13:45.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:45 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:45.967 2 INFO neutron.agent.securitygroups_rpc [None req-1ff370d2-800b-4ca9-851e-65244add5b48 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:46 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:46.432 2 INFO neutron.agent.securitygroups_rpc [None req-486abca4-76a9-447a-ba2e-d070f1a46a73 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:13:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:13:46 localhost systemd[1]: tmp-crun.dcqkgH.mount: Deactivated successfully. 
Oct 14 06:13:46 localhost podman[327912]: 2025-10-14 10:13:46.568266784 +0000 UTC m=+0.102253062 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:13:46 localhost podman[327912]: 2025-10-14 10:13:46.577392917 +0000 UTC m=+0.111379155 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:13:46 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:13:46 localhost podman[327911]: 2025-10-14 10:13:46.534000673 +0000 UTC m=+0.072093019 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:13:46 localhost podman[327911]: 2025-10-14 10:13:46.666549779 +0000 UTC m=+0.204642145 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:13:46 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:13:46 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:46.788 2 INFO neutron.agent.securitygroups_rpc [None req-178455f2-b48f-405f-a703-3d79838eab5f 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:13:47 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:47.186 2 INFO neutron.agent.securitygroups_rpc [None req-a733e5ee-97cf-4111-870d-1addde4a2c97 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:47 localhost nova_compute[295778]: 2025-10-14 10:13:47.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:48 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:48.026 2 INFO neutron.agent.securitygroups_rpc [None req-551f5b9f-9bf9-4c95-94f1-8a21fc183e6a 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:48 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:48.489 2 INFO neutron.agent.securitygroups_rpc [None req-9ddcdf24-da6e-45e8-989e-284a653acb68 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:48 localhost 
neutron_sriov_agent[263389]: 2025-10-14 10:13:48.775 2 INFO neutron.agent.securitygroups_rpc [None req-6ab7816b-72de-46f1-aca3-41e8306cf7b8 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['d59bbc3e-1ac0-4cfe-b59e-52dd8a190279']#033[00m Oct 14 06:13:49 localhost dnsmasq[327776]: exiting on receipt of SIGTERM Oct 14 06:13:49 localhost podman[327971]: 2025-10-14 10:13:49.069092605 +0000 UTC m=+0.064349222 container kill a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:13:49 localhost systemd[1]: tmp-crun.PasirO.mount: Deactivated successfully. Oct 14 06:13:49 localhost systemd[1]: libpod-a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f.scope: Deactivated successfully. 
Oct 14 06:13:49 localhost podman[327984]: 2025-10-14 10:13:49.137488205 +0000 UTC m=+0.059119594 container died a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:13:49 localhost podman[327984]: 2025-10-14 10:13:49.170099753 +0000 UTC m=+0.091731082 container cleanup a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 06:13:49 localhost systemd[1]: libpod-conmon-a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f.scope: Deactivated successfully. 
Oct 14 06:13:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:13:49 localhost podman[327991]: 2025-10-14 10:13:49.226466742 +0000 UTC m=+0.137108309 container remove a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-365508f7-5d28-41de-9fd7-3b1733c35155, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:13:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:49.257 270389 INFO neutron.agent.dhcp.agent [None req-7d995ba9-71e9-4fa8-aa3f-1196bbda16ad - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:13:49.485 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:13:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:49 localhost nova_compute[295778]: 2025-10-14 10:13:49.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:49 localhost nova_compute[295778]: 2025-10-14 10:13:49.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:49 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:49.846 2 INFO neutron.agent.securitygroups_rpc [None 
req-af3d35a0-3d99-4797-93fe-ba3c0c8619d5 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['486a2e86-116d-4ae9-86f7-271e7452bc24']#033[00m Oct 14 06:13:50 localhost systemd[1]: var-lib-containers-storage-overlay-abcdbd7c2da97f5268b9fe19feded1895ad14d2db31e2ae8736627b13db7b3da-merged.mount: Deactivated successfully. Oct 14 06:13:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1551af0422c4ca70f033354c8257e708b37fd1f95b6d8d7789dae384aa3185f-userdata-shm.mount: Deactivated successfully. Oct 14 06:13:50 localhost systemd[1]: run-netns-qdhcp\x2d365508f7\x2d5d28\x2d41de\x2d9fd7\x2d3b1733c35155.mount: Deactivated successfully. Oct 14 06:13:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:13:51 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:51.424 2 INFO neutron.agent.securitygroups_rpc [None req-f6fca79c-9728-44c4-882c-2bc25ce11c6c 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['e38a69d3-2bbe-4b7f-80be-eb189b5e362a']#033[00m Oct 14 06:13:51 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:51.544 2 INFO neutron.agent.securitygroups_rpc [None req-16c62dab-88ad-4bfd-9aae-b337240e45da 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['e38a69d3-2bbe-4b7f-80be-eb189b5e362a']#033[00m Oct 14 06:13:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:13:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:13:52 localhost podman[328014]: 2025-10-14 10:13:52.544883453 +0000 UTC m=+0.087741795 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 06:13:52 localhost podman[328014]: 2025-10-14 10:13:52.585173175 +0000 UTC m=+0.128031547 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, container_name=iscsid) Oct 14 06:13:52 localhost podman[328015]: 2025-10-14 10:13:52.595247292 +0000 UTC m=+0.134565740 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, 
tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 06:13:52 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:13:52 localhost podman[328015]: 2025-10-14 10:13:52.611172956 +0000 UTC m=+0.150491414 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:13:52 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:13:52 localhost nova_compute[295778]: 2025-10-14 10:13:52.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:52 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:52.768 2 INFO neutron.agent.securitygroups_rpc [None req-1e03a068-cc38-49cd-a31d-8f40ce72f97f 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['940479e4-7012-482c-a23a-4a0abd9edbc1']#033[00m Oct 14 06:13:52 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:52.990 2 INFO neutron.agent.securitygroups_rpc [None req-0c91c8fa-6f7f-4f82-8b4e-7cc1fcb4eb51 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['940479e4-7012-482c-a23a-4a0abd9edbc1']#033[00m Oct 14 06:13:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:13:54 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:54.404 2 INFO neutron.agent.securitygroups_rpc [None req-edfa6b41-f8e8-4b3a-b07b-6ba628b8b1f8 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['b7f5a6b8-0995-4d3f-8fb3-d87e109ba0e1']#033[00m Oct 14 06:13:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:54 localhost nova_compute[295778]: 2025-10-14 10:13:54.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:13:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:55.187 
2 INFO neutron.agent.securitygroups_rpc [None req-0772cae2-029e-4ff4-98c8-8d86758930be 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['b7f5a6b8-0995-4d3f-8fb3-d87e109ba0e1']#033[00m Oct 14 06:13:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:55.467 2 INFO neutron.agent.securitygroups_rpc [None req-a1ca05b4-df34-4178-a855-6749f59c06d9 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['b7f5a6b8-0995-4d3f-8fb3-d87e109ba0e1']#033[00m Oct 14 06:13:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:55.679 2 INFO neutron.agent.securitygroups_rpc [None req-3365dbb0-8534-4e33-9dda-3f8fb1a87a68 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['b7f5a6b8-0995-4d3f-8fb3-d87e109ba0e1']#033[00m Oct 14 06:13:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:56.107 2 INFO neutron.agent.securitygroups_rpc [None req-16528115-3ff4-4682-9514-f888a1e69c41 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['b7f5a6b8-0995-4d3f-8fb3-d87e109ba0e1']#033[00m Oct 14 06:13:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:56.369 2 INFO neutron.agent.securitygroups_rpc [None req-adcc8454-1ba4-4bed-a465-5ee863584558 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['b7f5a6b8-0995-4d3f-8fb3-d87e109ba0e1']#033[00m Oct 14 06:13:57 localhost neutron_sriov_agent[263389]: 2025-10-14 10:13:57.108 2 INFO neutron.agent.securitygroups_rpc [None req-4043034a-33c2-49f3-ae52-f643d5505d9f 879681508c614a5bb4766b7d8eed5096 570c1aeb24aa4b61a40c43f31c4e20b7 - - default default] Security group rule updated ['f8522dc9-f9f0-4f2e-9ce4-94b34244a5fd']#033[00m Oct 14 06:13:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 
177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:13:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:57.638 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:13:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:57.639 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:13:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:13:57.639 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:13:57 localhost nova_compute[295778]: 2025-10-14 10:13:57.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:13:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:13:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:13:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:13:59 localhost podman[328052]: 2025-10-14 10:13:59.556245441 +0000 UTC m=+0.086702507 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible) Oct 14 06:13:59 localhost podman[328052]: 2025-10-14 10:13:59.569026152 +0000 UTC m=+0.099483288 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red 
Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, container_name=openstack_network_exporter) Oct 14 06:13:59 
localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:13:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:13:59 localhost podman[328053]: 2025-10-14 10:13:59.664671196 +0000 UTC m=+0.191603888 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:13:59 localhost podman[328054]: 2025-10-14 10:13:59.735991553 +0000 UTC m=+0.261488807 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:13:59 localhost podman[328053]: 2025-10-14 10:13:59.745336472 +0000 UTC m=+0.272269174 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009, config_id=ovn_controller, tcib_managed=true) Oct 14 06:13:59 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:13:59 localhost nova_compute[295778]: 2025-10-14 10:13:59.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:13:59 localhost podman[328054]: 2025-10-14 10:13:59.797874329 +0000 UTC m=+0.323371593 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', 
'--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:13:59 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:14:00 localhost podman[246584]: time="2025-10-14T10:14:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:14:00 localhost podman[246584]: @ - - [14/Oct/2025:10:14:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:14:00 localhost podman[246584]: @ - - [14/Oct/2025:10:14:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18870 "" "Go-http-client/1.1" Oct 14 06:14:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:02 localhost nova_compute[295778]: 2025-10-14 10:14:02.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:03 localhost openstack_network_exporter[248748]: 
ERROR 10:14:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:14:03 localhost openstack_network_exporter[248748]: ERROR 10:14:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:14:03 localhost openstack_network_exporter[248748]: ERROR 10:14:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:14:03 localhost openstack_network_exporter[248748]: ERROR 10:14:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:14:03 localhost openstack_network_exporter[248748]: Oct 14 06:14:03 localhost openstack_network_exporter[248748]: ERROR 10:14:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:14:03 localhost openstack_network_exporter[248748]: Oct 14 06:14:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:04 localhost nova_compute[295778]: 2025-10-14 10:14:04.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:05 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:05.713 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:14:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:07 localhost nova_compute[295778]: 2025-10-14 10:14:07.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:14:09 Oct 14 06:14:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:14:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:14:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['backups', 'manila_data', '.mgr', 'vms', 'images', 'manila_metadata', 'volumes'] Oct 14 06:14:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:14:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:14:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:14:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:14:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:14:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:14:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:14:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] 
effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:14:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Oct 14 06:14:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:14:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:09 localhost nova_compute[295778]: 2025-10-14 10:14:09.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:10 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:10.407 
270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:14:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:12 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:12.441 2 INFO neutron.agent.securitygroups_rpc [None req-a59b2844-7b34-4d6c-877c-dea7c23306b9 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:14:12 localhost podman[328118]: 2025-10-14 10:14:12.565372673 +0000 UTC m=+0.086449521 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:14:12 localhost podman[328118]: 2025-10-14 10:14:12.579107119 +0000 UTC m=+0.100183987 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:14:12 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:14:12 localhost nova_compute[295778]: 2025-10-14 10:14:12.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:13 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:13.148 2 INFO neutron.agent.securitygroups_rpc [None req-eae30e61-3493-4ccf-8e72-0db56a05d689 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:14 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:14.019 2 INFO neutron.agent.securitygroups_rpc [None req-21c1a1e4-ef12-49e3-b011-67f8db036333 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:14.213 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:14:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd 
e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:14 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:14.706 2 INFO neutron.agent.securitygroups_rpc [None req-215ed238-aca1-4993-9200-eaa35ce32f87 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:14 localhost nova_compute[295778]: 2025-10-14 10:14:14.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:15 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:15.747 2 INFO neutron.agent.securitygroups_rpc [None req-d4a75c8c-e3f1-4a31-8229-40ae2dee1d6f a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:16 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:16.254 2 INFO neutron.agent.securitygroups_rpc [None req-4ee0c8d5-ef25-4663-b5ce-0f405265f491 2bf00e4bfd1e4117ae57dbbe3abd93b3 74bb29c117814a7892a70c60930de045 - - default default] Security group member updated ['c8cf527d-e0a1-47be-bc6f-70f653cc7616']#033[00m Oct 14 06:14:16 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:16.400 2 INFO neutron.agent.securitygroups_rpc [None req-d7dd8a2a-b9a1-4843-8dc0-74e7154c035a a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:17 
localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:17.437 2 INFO neutron.agent.securitygroups_rpc [None req-f5346351-0242-4565-bdf4-882c4ddf9389 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:14:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:14:17 localhost podman[328139]: 2025-10-14 10:14:17.567615541 +0000 UTC m=+0.085158647 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:14:17 localhost podman[328139]: 2025-10-14 10:14:17.583108352 +0000 UTC m=+0.100651468 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, 
name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:14:17 localhost podman[328138]: 2025-10-14 10:14:17.620491828 +0000 UTC m=+0.137824149 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:14:17 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:14:17 localhost podman[328138]: 2025-10-14 10:14:17.655104868 +0000 UTC m=+0.172437149 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:14:17 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:14:17 localhost nova_compute[295778]: 2025-10-14 10:14:17.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:17 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:17.922 2 INFO neutron.agent.securitygroups_rpc [None req-65a90c9a-34f3-4407-8f07-fa618ceea691 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:18 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:18.374 2 INFO neutron.agent.securitygroups_rpc [None req-29c296f1-5053-45b3-a790-9ffaf0558f46 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:18.719 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, 
old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:14:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:18.720 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:14:18 localhost nova_compute[295778]: 2025-10-14 10:14:18.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:19 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:19.663 2 INFO neutron.agent.securitygroups_rpc [None req-7d1da377-3db0-465e-8a58-6eacec09542a 2bf00e4bfd1e4117ae57dbbe3abd93b3 74bb29c117814a7892a70c60930de045 - - default default] Security group member updated ['c8cf527d-e0a1-47be-bc6f-70f653cc7616']#033[00m Oct 14 06:14:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:19 localhost nova_compute[295778]: 2025-10-14 10:14:19.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:19 localhost 
nova_compute[295778]: 2025-10-14 10:14:19.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:19 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:19.918 2 INFO neutron.agent.securitygroups_rpc [None req-1429dfc9-965f-4083-a883-4b569e676e00 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:19 localhost nova_compute[295778]: 2025-10-14 10:14:19.928 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:14:19 localhost nova_compute[295778]: 2025-10-14 10:14:19.928 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:14:19 localhost nova_compute[295778]: 2025-10-14 10:14:19.929 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:14:19 localhost nova_compute[295778]: 2025-10-14 10:14:19.929 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:14:19 localhost nova_compute[295778]: 2025-10-14 10:14:19.929 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:14:20 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:20.395 2 INFO neutron.agent.securitygroups_rpc [None req-c4020e51-22f0-42ab-a8f1-5443a3e98fd7 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:14:20 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/527515574' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:14:20 localhost nova_compute[295778]: 2025-10-14 10:14:20.478 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:14:20 localhost nova_compute[295778]: 2025-10-14 10:14:20.700 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:14:20 localhost nova_compute[295778]: 2025-10-14 10:14:20.703 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11467MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:14:20 localhost nova_compute[295778]: 2025-10-14 10:14:20.703 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:14:20 localhost nova_compute[295778]: 2025-10-14 10:14:20.704 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:14:20 localhost nova_compute[295778]: 2025-10-14 10:14:20.784 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:14:20 localhost nova_compute[295778]: 2025-10-14 10:14:20.784 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:14:20 localhost nova_compute[295778]: 2025-10-14 10:14:20.811 2 DEBUG oslo_concurrency.processutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:14:20 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:20.944 2 INFO neutron.agent.securitygroups_rpc [None req-f019390f-eadc-409e-adae-a8c86aad14a7 a119d95f2fc3446290208c405f40fc06 e549874548c54dd8b3b10588bdd2eec9 - - default default] Security group member updated ['f94f4b4b-b4b4-4fbc-9c6e-a13e840806df']#033[00m Oct 14 06:14:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:14:21 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3825745153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:14:21 localhost nova_compute[295778]: 2025-10-14 10:14:21.322 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:14:21 localhost nova_compute[295778]: 2025-10-14 10:14:21.328 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:14:21 localhost nova_compute[295778]: 2025-10-14 10:14:21.345 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider 
ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:14:21 localhost nova_compute[295778]: 2025-10-14 10:14:21.348 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:14:21 localhost nova_compute[295778]: 2025-10-14 10:14:21.348 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:14:22 localhost nova_compute[295778]: 2025-10-14 10:14:22.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:14:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:14:23 
localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:14:23 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:14:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:14:23 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:14:23 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 8b9784dd-199f-479d-8cf1-15ea4c6c1a57 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:14:23 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 8b9784dd-199f-479d-8cf1-15ea4c6c1a57 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:14:23 localhost ceph-mgr[300442]: [progress INFO root] Completed event 8b9784dd-199f-479d-8cf1-15ea4c6c1a57 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:14:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:14:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:14:23 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:14:23 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:14:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:14:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:14:23 localhost podman[328309]: 2025-10-14 10:14:23.431942323 +0000 UTC m=+0.083020270 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, 
managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 06:14:23 localhost podman[328309]: 2025-10-14 10:14:23.469157533 +0000 UTC m=+0.120235510 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Oct 14 06:14:23 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: 
Deactivated successfully. Oct 14 06:14:23 localhost podman[328308]: 2025-10-14 10:14:23.488804605 +0000 UTC m=+0.141456663 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:14:23 localhost podman[328308]: 2025-10-14 10:14:23.497476416 +0000 UTC m=+0.150128514 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 06:14:23 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:14:24 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:14:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:14:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:14:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:14:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:24 localhost nova_compute[295778]: 2025-10-14 10:14:24.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:25 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:25.722 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:14:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:27 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:27.619 2 INFO neutron.agent.securitygroups_rpc [None req-b77ddf35-4402-4af1-947b-e773082d3901 9cebd1ad9225424eb253dc6a7d396af9 96887d9c06a243c291a1dca4b8c2b18b - - default default] Security group member updated 
['47646898-ac45-4242-8cca-db8d39176af7']#033[00m Oct 14 06:14:27 localhost nova_compute[295778]: 2025-10-14 10:14:27.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:28 localhost nova_compute[295778]: 2025-10-14 10:14:28.345 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:28 localhost nova_compute[295778]: 2025-10-14 10:14:28.346 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:28 localhost nova_compute[295778]: 2025-10-14 10:14:28.346 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:28 localhost nova_compute[295778]: 2025-10-14 10:14:28.346 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:28 localhost nova_compute[295778]: 2025-10-14 10:14:28.346 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:28 localhost nova_compute[295778]: 2025-10-14 10:14:28.347 2 DEBUG nova.compute.manager [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:14:28 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:28.540 2 INFO neutron.agent.securitygroups_rpc [None req-ec768987-bbd6-4723-9163-14a7c5f70314 9cebd1ad9225424eb253dc6a7d396af9 96887d9c06a243c291a1dca4b8c2b18b - - default default] Security group member updated ['47646898-ac45-4242-8cca-db8d39176af7']#033[00m Oct 14 06:14:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:29 localhost nova_compute[295778]: 2025-10-14 10:14:29.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:29 localhost nova_compute[295778]: 2025-10-14 10:14:29.937 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:29 localhost nova_compute[295778]: 2025-10-14 10:14:29.938 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:14:29 localhost nova_compute[295778]: 2025-10-14 10:14:29.938 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding 
the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:14:29 localhost nova_compute[295778]: 2025-10-14 10:14:29.968 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:14:29 localhost nova_compute[295778]: 2025-10-14 10:14:29.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:14:30 localhost systemd[1]: tmp-crun.wvEKBo.mount: Deactivated successfully. Oct 14 06:14:30 localhost podman[328344]: 2025-10-14 10:14:30.553648467 +0000 UTC m=+0.094290369 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Oct 14 06:14:30 localhost podman[328344]: 2025-10-14 10:14:30.569223321 +0000 UTC m=+0.109865253 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.) Oct 14 06:14:30 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:14:30 localhost podman[246584]: time="2025-10-14T10:14:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:14:30 localhost podman[246584]: @ - - [14/Oct/2025:10:14:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:14:30 localhost podman[246584]: @ - - [14/Oct/2025:10:14:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18889 "" "Go-http-client/1.1" Oct 14 06:14:30 localhost systemd[1]: tmp-crun.cVvN0t.mount: Deactivated successfully. 
Oct 14 06:14:30 localhost podman[328345]: 2025-10-14 10:14:30.714281871 +0000 UTC m=+0.249370326 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:14:30 localhost podman[328345]: 2025-10-14 10:14:30.755135677 +0000 UTC m=+0.290224202 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:14:30 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:14:30 localhost podman[328346]: 2025-10-14 10:14:30.767121195 +0000 UTC m=+0.299533178 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:14:30 localhost podman[328346]: 2025-10-14 10:14:30.777018279 +0000 UTC m=+0.309430252 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:14:30 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:14:30 localhost nova_compute[295778]: 2025-10-14 10:14:30.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:32 localhost nova_compute[295778]: 2025-10-14 10:14:32.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:32 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
Oct 14 06:14:32 localhost nova_compute[295778]: 2025-10-14 10:14:32.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:14:32 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:32.972 2 INFO neutron.agent.securitygroups_rpc [None req-493a3d89-0790-46f5-867f-08353c1442dd 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:33 localhost openstack_network_exporter[248748]: ERROR 10:14:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:14:33 localhost openstack_network_exporter[248748]: ERROR 10:14:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:14:33 localhost openstack_network_exporter[248748]: ERROR 10:14:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:14:33 localhost openstack_network_exporter[248748]: ERROR 10:14:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:14:33 localhost openstack_network_exporter[248748]: Oct 14 06:14:33 localhost openstack_network_exporter[248748]: ERROR 10:14:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:14:33 localhost openstack_network_exporter[248748]: Oct 14 06:14:34 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:34.296 2 INFO neutron.agent.securitygroups_rpc [None req-ccc8d08d-5e27-4933-991c-cca7acd585e0 30647d4700b846dba79efd27fad03f3d 
a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:34 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:34.395 2 INFO neutron.agent.securitygroups_rpc [None req-ccc8d08d-5e27-4933-991c-cca7acd585e0 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:34 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:34.693 2 INFO neutron.agent.securitygroups_rpc [None req-c31bf8ce-e2f4-42b1-9816-f26162218d36 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:34 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:34.713 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:14:34 localhost nova_compute[295778]: 2025-10-14 10:14:34.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:35 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:35.257 2 INFO neutron.agent.securitygroups_rpc [None req-af1630c5-9e5e-4607-9e63-1d078ad7f844 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:35 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:35.285 270389 INFO neutron.agent.dhcp.agent [-] 
Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:14:36 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:36.252 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:14:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:37 localhost nova_compute[295778]: 2025-10-14 10:14:37.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:38 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:38.567 2 INFO neutron.agent.securitygroups_rpc [None req-930c3719-0315-4802-aedb-d477db85cbdd 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:14:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:14:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:14:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:14:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:14:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:14:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:39 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:39.229 2 INFO neutron.agent.securitygroups_rpc [None req-1c8b7a71-aa4f-4514-bc27-ccf377daaaa0 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:40 localhost nova_compute[295778]: 2025-10-14 10:14:40.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:42 localhost nova_compute[295778]: 2025-10-14 10:14:42.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:14:43 localhost podman[328413]: 2025-10-14 10:14:43.54067677 +0000 UTC m=+0.075031037 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:14:43 localhost podman[328413]: 2025-10-14 10:14:43.552228387 +0000 UTC m=+0.086582704 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible) Oct 14 06:14:43 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:14:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:44 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:44.853 2 INFO neutron.agent.securitygroups_rpc [None req-087b4d3a-425d-4e9f-a312-99c3e3dbbf2b b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m Oct 14 06:14:45 localhost nova_compute[295778]: 2025-10-14 10:14:45.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:45 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:45.658 2 INFO neutron.agent.securitygroups_rpc [None req-9c4b57ab-1247-47ee-8b0d-b94e292a0e87 b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m Oct 14 06:14:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:47 localhost nova_compute[295778]: 2025-10-14 10:14:47.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:48 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:48.071 2 INFO neutron.agent.securitygroups_rpc [None req-e913d1a3-5246-4050-a905-42a51970b588 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:48 localhost systemd[1]: Started /usr/bin/podman healthcheck 
run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:14:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:14:48 localhost systemd[1]: tmp-crun.ild1fX.mount: Deactivated successfully. Oct 14 06:14:48 localhost podman[328433]: 2025-10-14 10:14:48.552035242 +0000 UTC m=+0.089868172 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true) Oct 14 06:14:48 localhost podman[328433]: 2025-10-14 10:14:48.561152454 +0000 UTC m=+0.098985364 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:14:48 localhost podman[328434]: 2025-10-14 10:14:48.573051752 +0000 UTC m=+0.104671867 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:14:48 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:14:48 localhost podman[328434]: 2025-10-14 10:14:48.586069317 +0000 UTC m=+0.117689392 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:14:48 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:14:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:49 localhost systemd[1]: tmp-crun.qJN6ll.mount: Deactivated successfully. 
Oct 14 06:14:49 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:49.642 2 INFO neutron.agent.securitygroups_rpc [None req-a9a6ddc4-4af7-41ae-9f40-e52cdf5d095d 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:49.703 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.972 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:14:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:14:50 localhost nova_compute[295778]: 2025-10-14 10:14:50.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:51 localhost ceph-mgr[300442]: 
log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:51 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:51.476 2 INFO neutron.agent.securitygroups_rpc [None req-b7e36e4d-3b04-467d-a942-726901765f67 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:52 localhost nova_compute[295778]: 2025-10-14 10:14:52.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:52 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:52.848 2 INFO neutron.agent.securitygroups_rpc [None req-660da2bb-66db-4c5b-9cda-836400286a3e 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:52.912 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:14:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:14:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:14:54 localhost podman[328475]: 2025-10-14 10:14:54.535221497 +0000 UTC m=+0.079040253 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0) Oct 14 06:14:54 localhost podman[328475]: 2025-10-14 10:14:54.550536114 +0000 UTC m=+0.094354860 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 06:14:54 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:14:54 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:54.568 2 INFO neutron.agent.securitygroups_rpc [None req-567f1889-5571-45a9-97ac-31082d51ba82 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:54 localhost podman[328476]: 2025-10-14 10:14:54.601415128 +0000 UTC m=+0.141373672 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, 
container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:14:54 localhost podman[328476]: 2025-10-14 10:14:54.615092542 +0000 UTC m=+0.155051076 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, 
org.label-schema.license=GPLv2) Oct 14 06:14:54 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:14:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:14:54 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:54.813 2 INFO neutron.agent.securitygroups_rpc [None req-e928fd5e-95df-4c94-b4fd-7975f86e3b55 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:14:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:55.070 270389 INFO neutron.agent.linux.ip_lib [None req-90ee1e71-de2a-44fe-b210-a9fac4bdd6e8 - - - - - -] Device tap942ec984-3c cannot be used as it has no MAC address#033[00m Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost kernel: device tap942ec984-3c entered promiscuous mode Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost ovn_controller[156286]: 2025-10-14T10:14:55Z|00162|binding|INFO|Claiming lport 942ec984-3ceb-4ade-b895-d87ffea2ea1c for this chassis. 
Oct 14 06:14:55 localhost ovn_controller[156286]: 2025-10-14T10:14:55Z|00163|binding|INFO|942ec984-3ceb-4ade-b895-d87ffea2ea1c: Claiming unknown Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost NetworkManager[5972]: [1760436895.1038] manager: (tap942ec984-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/33) Oct 14 06:14:55 localhost systemd-udevd[328522]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:14:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:55.120 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=942ec984-3ceb-4ade-b895-d87ffea2ea1c) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:14:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:55.124 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 942ec984-3ceb-4ade-b895-d87ffea2ea1c in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:14:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:55.125 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:14:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:55.126 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8252b1-80d0-4842-826f-6e7ea732cddb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:14:55 localhost journal[236030]: ethtool ioctl error on tap942ec984-3c: No such device Oct 14 06:14:55 localhost journal[236030]: ethtool ioctl error on tap942ec984-3c: No such device Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost ovn_controller[156286]: 2025-10-14T10:14:55Z|00164|binding|INFO|Setting lport 942ec984-3ceb-4ade-b895-d87ffea2ea1c ovn-installed in OVS Oct 14 06:14:55 localhost ovn_controller[156286]: 2025-10-14T10:14:55Z|00165|binding|INFO|Setting lport 942ec984-3ceb-4ade-b895-d87ffea2ea1c up in Southbound Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost journal[236030]: ethtool ioctl error on tap942ec984-3c: No such device Oct 
14 06:14:55 localhost journal[236030]: ethtool ioctl error on tap942ec984-3c: No such device Oct 14 06:14:55 localhost journal[236030]: ethtool ioctl error on tap942ec984-3c: No such device Oct 14 06:14:55 localhost journal[236030]: ethtool ioctl error on tap942ec984-3c: No such device Oct 14 06:14:55 localhost journal[236030]: ethtool ioctl error on tap942ec984-3c: No such device Oct 14 06:14:55 localhost journal[236030]: ethtool ioctl error on tap942ec984-3c: No such device Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:55.640 270389 INFO neutron.agent.linux.ip_lib [None req-e8f5f919-3bf2-48bb-ace8-cf5edaf9976d - - - - - -] Device tap3f587c0d-91 cannot be used as it has no MAC address#033[00m Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost kernel: device tap3f587c0d-91 entered promiscuous mode Oct 14 06:14:55 localhost NetworkManager[5972]: [1760436895.6700] manager: (tap3f587c0d-91): new Generic device (/org/freedesktop/NetworkManager/Devices/34) Oct 14 06:14:55 localhost ovn_controller[156286]: 2025-10-14T10:14:55Z|00166|binding|INFO|Claiming lport 3f587c0d-9169-4fce-9902-0017eddbdea0 for this chassis. 
Oct 14 06:14:55 localhost ovn_controller[156286]: 2025-10-14T10:14:55Z|00167|binding|INFO|3f587c0d-9169-4fce-9902-0017eddbdea0: Claiming unknown Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:55.682 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-d9e53ed8-ad92-47c7-993a-500ed592c18d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9e53ed8-ad92-47c7-993a-500ed592c18d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '458840010c184f038de4a002f5b46e4a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a448f1b-677d-4a0a-8950-90770ecd1465, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3f587c0d-9169-4fce-9902-0017eddbdea0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:14:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:55.684 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 3f587c0d-9169-4fce-9902-0017eddbdea0 in datapath 
d9e53ed8-ad92-47c7-993a-500ed592c18d bound to our chassis#033[00m Oct 14 06:14:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:55.689 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d9e53ed8-ad92-47c7-993a-500ed592c18d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:14:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:55.692 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[714c059f-dd19-4ced-bfd6-81847f6b6c1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:14:55 localhost ovn_controller[156286]: 2025-10-14T10:14:55Z|00168|binding|INFO|Setting lport 3f587c0d-9169-4fce-9902-0017eddbdea0 ovn-installed in OVS Oct 14 06:14:55 localhost ovn_controller[156286]: 2025-10-14T10:14:55Z|00169|binding|INFO|Setting lport 3f587c0d-9169-4fce-9902-0017eddbdea0 up in Southbound Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:55 localhost nova_compute[295778]: 2025-10-14 10:14:55.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:56 localhost podman[328618]: Oct 14 06:14:56 localhost podman[328618]: 2025-10-14 10:14:56.076155222 +0000 UTC m=+0.096641463 container create 1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009) Oct 14 06:14:56 localhost systemd[1]: Started libpod-conmon-1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c.scope. Oct 14 06:14:56 localhost podman[328618]: 2025-10-14 10:14:56.033360214 +0000 UTC m=+0.053846515 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:14:56 localhost systemd[1]: tmp-crun.67ct8V.mount: Deactivated successfully. Oct 14 06:14:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:56.141 2 INFO neutron.agent.securitygroups_rpc [None req-0d57410a-0cad-4e48-94db-70e77579a117 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:14:56 localhost systemd[1]: Started libcrun container. 
Oct 14 06:14:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24888af6df81047fe03ed1a29bec9cfbe6052f4de709fc2aa1ca68dec84bdbc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:14:56 localhost podman[328618]: 2025-10-14 10:14:56.171986171 +0000 UTC m=+0.192472422 container init 1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:14:56 localhost podman[328618]: 2025-10-14 10:14:56.181682489 +0000 UTC m=+0.202168760 container start 1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 06:14:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:56.185 2 INFO neutron.agent.securitygroups_rpc [None req-1976456b-372a-4c63-9d45-b4047db9b826 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:14:56 localhost dnsmasq[328649]: started, version 2.85 cachesize 150 Oct 14 06:14:56 localhost 
dnsmasq[328649]: DNS service limited to local subnets Oct 14 06:14:56 localhost dnsmasq[328649]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:14:56 localhost dnsmasq[328649]: warning: no upstream servers configured Oct 14 06:14:56 localhost dnsmasq-dhcp[328649]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:14:56 localhost dnsmasq[328649]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:14:56 localhost dnsmasq-dhcp[328649]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:14:56 localhost dnsmasq-dhcp[328649]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:14:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:56.241 270389 INFO neutron.agent.dhcp.agent [None req-90ee1e71-de2a-44fe-b210-a9fac4bdd6e8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:14:54Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4613fdc0-fd24-463f-acc6-186aac9b839e, ip_allocation=immediate, mac_address=fa:16:3e:30:3d:17, name=tempest-NetworksTestDHCPv6-1245626928, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=2, 
router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['37069b1d-9b04-4caa-91cf-9a10034d4e98'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:14:53Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=1580, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:14:54Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:14:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:56.255 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:14:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:56.337 270389 INFO neutron.agent.dhcp.agent [None req-d46ac919-f535-4a00-89df-101f47172f51 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:14:56 localhost dnsmasq[328649]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:14:56 localhost dnsmasq-dhcp[328649]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:14:56 localhost dnsmasq-dhcp[328649]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:14:56 localhost podman[328672]: 2025-10-14 10:14:56.40460654 +0000 UTC m=+0.046325013 container kill 1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:14:56 localhost podman[328714]: Oct 14 06:14:56 localhost podman[328714]: 2025-10-14 10:14:56.647531783 +0000 UTC m=+0.090973592 container create caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:14:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:56.663 270389 INFO neutron.agent.dhcp.agent [None req-72c72929-f07e-4c13-8523-e12a3425b5fa - - - - - -] DHCP configuration for ports {'4613fdc0-fd24-463f-acc6-186aac9b839e'} is completed#033[00m Oct 14 06:14:56 localhost systemd[1]: Started libpod-conmon-caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b.scope. Oct 14 06:14:56 localhost podman[328714]: 2025-10-14 10:14:56.600974644 +0000 UTC m=+0.044416493 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:14:56 localhost systemd[1]: Started libcrun container. 
Oct 14 06:14:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfafd6841f9025686edd6ba3227036050805ec1a9f52a93901df5cecb80afa6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:14:56 localhost dnsmasq[328649]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:14:56 localhost dnsmasq-dhcp[328649]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:14:56 localhost dnsmasq-dhcp[328649]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:14:56 localhost podman[328743]: 2025-10-14 10:14:56.725036875 +0000 UTC m=+0.065875524 container kill 1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:14:56 localhost podman[328714]: 2025-10-14 10:14:56.768521131 +0000 UTC m=+0.211962940 container init caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:14:56 localhost podman[328714]: 2025-10-14 10:14:56.777181261 
+0000 UTC m=+0.220623080 container start caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 06:14:56 localhost dnsmasq[328765]: started, version 2.85 cachesize 150 Oct 14 06:14:56 localhost dnsmasq[328765]: DNS service limited to local subnets Oct 14 06:14:56 localhost dnsmasq[328765]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:14:56 localhost dnsmasq[328765]: warning: no upstream servers configured Oct 14 06:14:56 localhost dnsmasq-dhcp[328765]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:14:56 localhost dnsmasq[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/addn_hosts - 0 addresses Oct 14 06:14:56 localhost dnsmasq-dhcp[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/host Oct 14 06:14:56 localhost dnsmasq-dhcp[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/opts Oct 14 06:14:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:56.912 270389 INFO neutron.agent.dhcp.agent [None req-e6cf097e-8a6a-4151-9c04-50d14e7620e5 - - - - - -] DHCP configuration for ports {'346d512d-e08d-41c1-81ce-bc02bf525a1d'} is completed#033[00m Oct 14 06:14:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:14:57 localhost dnsmasq[328649]: exiting on receipt of 
SIGTERM Oct 14 06:14:57 localhost podman[328788]: 2025-10-14 10:14:57.241650328 +0000 UTC m=+0.065035081 container kill 1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009) Oct 14 06:14:57 localhost systemd[1]: libpod-1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c.scope: Deactivated successfully. Oct 14 06:14:57 localhost podman[328801]: 2025-10-14 10:14:57.316690145 +0000 UTC m=+0.062813193 container died 1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:14:57 localhost podman[328801]: 2025-10-14 10:14:57.351383617 +0000 UTC m=+0.097506625 container cleanup 1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:14:57 localhost systemd[1]: libpod-conmon-1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c.scope: Deactivated successfully. Oct 14 06:14:57 localhost podman[328803]: 2025-10-14 10:14:57.394267498 +0000 UTC m=+0.128165710 container remove 1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:14:57 localhost nova_compute[295778]: 2025-10-14 10:14:57.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:57 localhost ovn_controller[156286]: 2025-10-14T10:14:57Z|00170|binding|INFO|Releasing lport 942ec984-3ceb-4ade-b895-d87ffea2ea1c from this chassis (sb_readonly=0) Oct 14 06:14:57 localhost kernel: device tap942ec984-3c left promiscuous mode Oct 14 06:14:57 localhost ovn_controller[156286]: 2025-10-14T10:14:57Z|00171|binding|INFO|Setting lport 942ec984-3ceb-4ade-b895-d87ffea2ea1c down in Southbound Oct 14 06:14:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:57.415 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 
'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=942ec984-3ceb-4ade-b895-d87ffea2ea1c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:14:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:57.417 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 942ec984-3ceb-4ade-b895-d87ffea2ea1c in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:14:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:57.419 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:14:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:57.420 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe13520-6f72-466c-b47f-f7111002cfbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:14:57 localhost nova_compute[295778]: 
2025-10-14 10:14:57.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:57 localhost systemd[1]: var-lib-containers-storage-overlay-24888af6df81047fe03ed1a29bec9cfbe6052f4de709fc2aa1ca68dec84bdbc9-merged.mount: Deactivated successfully. Oct 14 06:14:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b92292587b1b1cfe6e01f6573c4ef8ce56542e33928f8050ad64f122106b49c-userdata-shm.mount: Deactivated successfully. Oct 14 06:14:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:57.639 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:14:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:57.640 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:14:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:57.640 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:14:57 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. 
Oct 14 06:14:57 localhost nova_compute[295778]: 2025-10-14 10:14:57.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:58 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:58.540 270389 INFO neutron.agent.linux.ip_lib [None req-9c0a8660-67a2-47ed-b3f3-b50d3f127566 - - - - - -] Device tapa9ef5d87-89 cannot be used as it has no MAC address#033[00m Oct 14 06:14:58 localhost nova_compute[295778]: 2025-10-14 10:14:58.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:14:58 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:58.564 2 INFO neutron.agent.securitygroups_rpc [None req-02e008f2-a233-4874-8613-ded4911feb59 b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m Oct 14 06:14:58 localhost kernel: device tapa9ef5d87-89 entered promiscuous mode Oct 14 06:14:58 localhost NetworkManager[5972]: [1760436898.5731] manager: (tapa9ef5d87-89): new Generic device (/org/freedesktop/NetworkManager/Devices/35) Oct 14 06:14:58 localhost ovn_controller[156286]: 2025-10-14T10:14:58Z|00172|binding|INFO|Claiming lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 for this chassis. 
Oct 14 06:14:58 localhost ovn_controller[156286]: 2025-10-14T10:14:58Z|00173|binding|INFO|a9ef5d87-89ba-483d-a3c8-b8bd22f3b794: Claiming unknown
Oct 14 06:14:58 localhost nova_compute[295778]: 2025-10-14 10:14:58.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:14:58 localhost ovn_controller[156286]: 2025-10-14T10:14:58Z|00174|binding|INFO|Setting lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 ovn-installed in OVS
Oct 14 06:14:58 localhost ovn_controller[156286]: 2025-10-14T10:14:58Z|00175|binding|INFO|Setting lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 up in Southbound
Oct 14 06:14:58 localhost nova_compute[295778]: 2025-10-14 10:14:58.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:14:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:58.592 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a9ef5d87-89ba-483d-a3c8-b8bd22f3b794) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:14:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:58.594 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis
Oct 14 06:14:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:58.598 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 06:14:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:14:58.598 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[08b4486a-c678-4fc2-a1c3-7853adf50bca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:14:58 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:14:58 localhost nova_compute[295778]: 2025-10-14 10:14:58.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:14:58 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:14:58 localhost nova_compute[295778]: 2025-10-14 10:14:58.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:14:58 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:14:58 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:14:58 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:14:58 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:14:58 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:14:58 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:14:58 localhost nova_compute[295778]: 2025-10-14 10:14:58.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:14:58 localhost nova_compute[295778]: 2025-10-14 10:14:58.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:14:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:14:59 localhost neutron_sriov_agent[263389]: 2025-10-14 10:14:59.449 2 INFO neutron.agent.securitygroups_rpc [None req-6190d315-fba4-49e4-99ef-d93a305a28b6 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']
Oct 14 06:14:59 localhost podman[328911]:
Oct 14 06:14:59 localhost podman[328911]: 2025-10-14 10:14:59.472869867 +0000 UTC m=+0.088572697 container create 558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:14:59 localhost systemd[1]: Started libpod-conmon-558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904.scope.
Oct 14 06:14:59 localhost podman[328911]: 2025-10-14 10:14:59.429854783 +0000 UTC m=+0.045557653 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:14:59 localhost systemd[1]: Started libcrun container.
Oct 14 06:14:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eedee6ca68f75cd9094ead5a82f780d7738334f3c0118e36348f401b20e0ce78/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:14:59 localhost podman[328911]: 2025-10-14 10:14:59.55419659 +0000 UTC m=+0.169899420 container init 558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 06:14:59 localhost podman[328911]: 2025-10-14 10:14:59.563506669 +0000 UTC m=+0.179209499 container start 558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 06:14:59 localhost dnsmasq[328929]: started, version 2.85 cachesize 150
Oct 14 06:14:59 localhost dnsmasq[328929]: DNS service limited to local subnets
Oct 14 06:14:59 localhost dnsmasq[328929]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:14:59 localhost dnsmasq[328929]: warning: no upstream servers configured
Oct 14 06:14:59 localhost dnsmasq[328929]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:14:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:59.625 270389 INFO neutron.agent.dhcp.agent [None req-9c0a8660-67a2-47ed-b3f3-b50d3f127566 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:14:58Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5ab9c7b2-6a9d-4095-8fc9-0546db08107f, ip_allocation=immediate, mac_address=fa:16:3e:e7:62:c4, name=tempest-NetworksTestDHCPv6-878918366, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=4, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['ba58ee79-289f-4958-836a-b2c581aaffee'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:14:57Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=1613, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:14:58Z on network 74049e43-4aa7-4318-9233-a58980c3495b
Oct 14 06:14:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:14:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:14:59.705 270389 INFO neutron.agent.dhcp.agent [None req-44b44429-0086-4701-ac98-498e18867670 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed
Oct 14 06:14:59 localhost dnsmasq[328929]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses
Oct 14 06:14:59 localhost podman[328947]: 2025-10-14 10:14:59.823391443 +0000 UTC m=+0.065507805 container kill 558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:00 localhost nova_compute[295778]: 2025-10-14 10:15:00.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:00 localhost ovn_controller[156286]: 2025-10-14T10:15:00Z|00176|binding|INFO|Releasing lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 from this chassis (sb_readonly=0)
Oct 14 06:15:00 localhost kernel: device tapa9ef5d87-89 left promiscuous mode
Oct 14 06:15:00 localhost ovn_controller[156286]: 2025-10-14T10:15:00Z|00177|binding|INFO|Setting lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 down in Southbound
Oct 14 06:15:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:00.054 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a9ef5d87-89ba-483d-a3c8-b8bd22f3b794) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:15:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:00.056 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis
Oct 14 06:15:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:00.058 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 06:15:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:00.059 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[90f3c84d-837a-4e68-8650-e9db99b8f07b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:15:00 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:00.069 270389 INFO neutron.agent.dhcp.agent [None req-afa13871-c04b-4b30-9973-b1ea15344423 - - - - - -] DHCP configuration for ports {'5ab9c7b2-6a9d-4095-8fc9-0546db08107f'} is completed
Oct 14 06:15:00 localhost nova_compute[295778]: 2025-10-14 10:15:00.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:00 localhost nova_compute[295778]: 2025-10-14 10:15:00.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:00 localhost nova_compute[295778]: 2025-10-14 10:15:00.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:00 localhost podman[246584]: time="2025-10-14T10:15:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 06:15:00 localhost podman[246584]: @ - - [14/Oct/2025:10:15:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148030 "" "Go-http-client/1.1"
Oct 14 06:15:00 localhost podman[246584]: @ - - [14/Oct/2025:10:15:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19829 "" "Go-http-client/1.1"
Oct 14 06:15:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:15:01 localhost systemd[1]: tmp-crun.jWadfL.mount: Deactivated successfully.
Oct 14 06:15:01 localhost podman[328973]: 2025-10-14 10:15:01.565547 +0000 UTC m=+0.094363531 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 06:15:01 localhost podman[328973]: 2025-10-14 10:15:01.601148897 +0000 UTC m=+0.129965438 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 06:15:01 localhost podman[328971]: 2025-10-14 10:15:01.623016149 +0000 UTC m=+0.157402238 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, architecture=x86_64, config_id=edpm, vcs-type=git, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 14 06:15:01 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:15:01 localhost podman[328972]: 2025-10-14 10:15:01.670616965 +0000 UTC m=+0.200128214 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:01 localhost podman[328971]: 2025-10-14 10:15:01.685283925 +0000 UTC m=+0.219669984 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Oct 14 06:15:01 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:01.687 2 INFO neutron.agent.securitygroups_rpc [None req-da490536-bf82-49ef-911b-729ed0c16fa6 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']
Oct 14 06:15:01 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:15:01 localhost podman[328972]: 2025-10-14 10:15:01.73731663 +0000 UTC m=+0.266827839 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 06:15:01 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:15:01 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:01.930 2 INFO neutron.agent.securitygroups_rpc [None req-b3b606fd-fef6-4f13-b26d-debe6e3790e2 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']
Oct 14 06:15:02 localhost dnsmasq[328929]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:15:02 localhost podman[329054]: 2025-10-14 10:15:02.096487635 +0000 UTC m=+0.058215870 container kill 558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 74049e43-4aa7-4318-9233-a58980c3495b.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa9ef5d87-89 not found in namespace qdhcp-74049e43-4aa7-4318-9233-a58980c3495b.
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa9ef5d87-89 not found in namespace qdhcp-74049e43-4aa7-4318-9233-a58980c3495b.
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.119 270389 ERROR neutron.agent.dhcp.agent
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.126 270389 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.364 270389 INFO neutron.agent.dhcp.agent [None req-3bbc15e3-85ce-408d-8d05-6f357cd71677 - - - - - -] All active networks have been fetched through RPC.
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.365 270389 INFO neutron.agent.dhcp.agent [-] Starting network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.369 270389 INFO neutron.agent.dhcp.agent [-] Starting network 7c69ea3e-ed70-4a0e-a9f9-cd75740e37fa dhcp configuration
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.370 270389 INFO neutron.agent.dhcp.agent [-] Finished network 7c69ea3e-ed70-4a0e-a9f9-cd75740e37fa dhcp configuration
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.370 270389 INFO neutron.agent.dhcp.agent [-] Starting network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.370 270389 INFO neutron.agent.dhcp.agent [-] Finished network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration
Oct 14 06:15:02 localhost dnsmasq[328929]: exiting on receipt of SIGTERM
Oct 14 06:15:02 localhost podman[329084]: 2025-10-14 10:15:02.539519101 +0000 UTC
m=+0.055288542 container kill 558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:15:02 localhost systemd[1]: libpod-558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904.scope: Deactivated successfully. Oct 14 06:15:02 localhost systemd[1]: tmp-crun.xCKVbe.mount: Deactivated successfully. Oct 14 06:15:02 localhost podman[329098]: 2025-10-14 10:15:02.595808389 +0000 UTC m=+0.036799500 container died 558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 14 06:15:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:02 localhost systemd[1]: var-lib-containers-storage-overlay-eedee6ca68f75cd9094ead5a82f780d7738334f3c0118e36348f401b20e0ce78-merged.mount: Deactivated successfully. 
Oct 14 06:15:02 localhost podman[329098]: 2025-10-14 10:15:02.695173023 +0000 UTC m=+0.136164074 container remove 558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:15:02 localhost systemd[1]: libpod-conmon-558a90581f658abd0b719de8907f0705c7ef77d21af667f615990f9c09b02904.scope: Deactivated successfully.
Oct 14 06:15:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:02.757 270389 INFO neutron.agent.linux.ip_lib [-] Device tapa9ef5d87-89 cannot be used as it has no MAC address#033[00m
Oct 14 06:15:02 localhost nova_compute[295778]: 2025-10-14 10:15:02.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:02 localhost kernel: device tapa9ef5d87-89 entered promiscuous mode
Oct 14 06:15:02 localhost ovn_controller[156286]: 2025-10-14T10:15:02Z|00178|binding|INFO|Claiming lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 for this chassis.
Oct 14 06:15:02 localhost ovn_controller[156286]: 2025-10-14T10:15:02Z|00179|binding|INFO|a9ef5d87-89ba-483d-a3c8-b8bd22f3b794: Claiming unknown
Oct 14 06:15:02 localhost NetworkManager[5972]: [1760436902.7885] manager: (tapa9ef5d87-89): new Generic device (/org/freedesktop/NetworkManager/Devices/36)
Oct 14 06:15:02 localhost nova_compute[295778]: 2025-10-14 10:15:02.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:02 localhost systemd-udevd[329129]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:15:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:02.795 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a9ef5d87-89ba-483d-a3c8-b8bd22f3b794) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:15:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:02.796 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m
Oct 14 06:15:02 localhost ovn_controller[156286]: 2025-10-14T10:15:02Z|00180|binding|INFO|Setting lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 up in Southbound
Oct 14 06:15:02 localhost ovn_controller[156286]: 2025-10-14T10:15:02Z|00181|binding|INFO|Setting lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 ovn-installed in OVS
Oct 14 06:15:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:02.799 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 06:15:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:02.800 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[480a6fe9-cc50-4aa5-b9f7-27d22d7e1dbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:15:02 localhost nova_compute[295778]: 2025-10-14 10:15:02.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:02 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:15:02 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:15:02 localhost nova_compute[295778]: 2025-10-14 10:15:02.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:02 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:15:02 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:15:02 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:15:02 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:15:02 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:15:02 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:15:02 localhost nova_compute[295778]: 2025-10-14 10:15:02.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:02 localhost journal[236030]: ethtool ioctl error on tapa9ef5d87-89: No such device
Oct 14 06:15:02 localhost nova_compute[295778]: 2025-10-14 10:15:02.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:02 localhost nova_compute[295778]: 2025-10-14 10:15:02.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:15:03 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:03.270 2 INFO neutron.agent.securitygroups_rpc [None req-9387dae5-72b9-4186-9a6c-a7f3a1f00f20 476187b4066141bb9d0e00e94ed7295c 7bf1be3a6a454996a4414fad306906f1 - - default default] Security group member updated ['a0f73c72-581b-41a5-a47e-a3f1b6149df7']#033[00m
Oct 14 06:15:03 localhost openstack_network_exporter[248748]: ERROR 10:15:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:15:03 localhost openstack_network_exporter[248748]: ERROR 10:15:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:15:03 localhost openstack_network_exporter[248748]: ERROR 10:15:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 06:15:03 localhost openstack_network_exporter[248748]: ERROR 10:15:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 06:15:03 localhost openstack_network_exporter[248748]:
Oct 14 06:15:03 localhost openstack_network_exporter[248748]: ERROR 10:15:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 06:15:03 localhost openstack_network_exporter[248748]:
Oct 14 06:15:03 localhost podman[329198]:
Oct 14 06:15:03 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:03.616 2 INFO neutron.agent.securitygroups_rpc [None req-28321eb3-6e5c-4c40-82a9-856981146355 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m
Oct 14 06:15:03 localhost podman[329198]: 2025-10-14 10:15:03.625244306 +0000 UTC m=+0.093797927 container create 749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:15:03 localhost systemd[1]: Started libpod-conmon-749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110.scope.
Oct 14 06:15:03 localhost systemd[1]: tmp-crun.0USsUB.mount: Deactivated successfully.
Oct 14 06:15:03 localhost systemd[1]: Started libcrun container.
Oct 14 06:15:03 localhost podman[329198]: 2025-10-14 10:15:03.579601571 +0000 UTC m=+0.048155272 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:15:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad759b92adfb4464f53684c1320828cc5fd90e4e766a34820caeb50c5d4c0d79/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:15:03 localhost podman[329198]: 2025-10-14 10:15:03.692757572 +0000 UTC m=+0.161311223 container init 749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:03 localhost podman[329198]: 2025-10-14 10:15:03.703104387 +0000 UTC m=+0.171658028 container start 749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009)
Oct 14 06:15:03 localhost dnsmasq[329215]: started, version 2.85 cachesize 150
Oct 14 06:15:03 localhost dnsmasq[329215]: DNS service limited to local subnets
Oct 14 06:15:03 localhost dnsmasq[329215]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:15:03 localhost dnsmasq[329215]: warning: no upstream servers configured
Oct 14 06:15:03 localhost dnsmasq[329215]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:15:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:03.757 270389 INFO neutron.agent.dhcp.agent [-] Finished network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m
Oct 14 06:15:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:03.758 270389 INFO neutron.agent.dhcp.agent [None req-3bbc15e3-85ce-408d-8d05-6f357cd71677 - - - - - -] Synchronizing state complete#033[00m
Oct 14 06:15:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:03.942 270389 INFO neutron.agent.dhcp.agent [None req-0e7a55f7-c2f2-4e9b-b130-afb8b776e9a7 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a9ef5d87-89ba-483d-a3c8-b8bd22f3b794'} is completed#033[00m
Oct 14 06:15:04 localhost dnsmasq[329215]: exiting on receipt of SIGTERM
Oct 14 06:15:04 localhost podman[329233]: 2025-10-14 10:15:04.040171075 +0000 UTC m=+0.061020355 container kill 749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 06:15:04 localhost systemd[1]: libpod-749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110.scope: Deactivated successfully.
Oct 14 06:15:04 localhost podman[329246]: 2025-10-14 10:15:04.114981155 +0000 UTC m=+0.058085106 container died 749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 06:15:04 localhost podman[329246]: 2025-10-14 10:15:04.144179241 +0000 UTC m=+0.087283142 container cleanup 749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 06:15:04 localhost systemd[1]: libpod-conmon-749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110.scope: Deactivated successfully.
Oct 14 06:15:04 localhost podman[329247]: 2025-10-14 10:15:04.204939928 +0000 UTC m=+0.140616922 container remove 749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 06:15:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:04.250 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:03Z, description=, device_id=6bbcb3d8-8bed-436a-b383-8ae8842802b2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d27c9511-f667-449c-9738-df7a7a07bae5, ip_allocation=immediate, mac_address=fa:16:3e:2e:4e:b8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:53Z, description=, dns_domain=, id=d9e53ed8-ad92-47c7-993a-500ed592c18d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1256037055-network, port_security_enabled=True, project_id=458840010c184f038de4a002f5b46e4a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18184, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1572, status=ACTIVE, subnets=['26daf31a-98bc-4024-bf6b-43b59aba9500'], tags=[], tenant_id=458840010c184f038de4a002f5b46e4a, updated_at=2025-10-14T10:14:54Z, vlan_transparent=None, network_id=d9e53ed8-ad92-47c7-993a-500ed592c18d, port_security_enabled=False, project_id=458840010c184f038de4a002f5b46e4a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1624, status=DOWN, tags=[], tenant_id=458840010c184f038de4a002f5b46e4a, updated_at=2025-10-14T10:15:04Z on network d9e53ed8-ad92-47c7-993a-500ed592c18d#033[00m
Oct 14 06:15:04 localhost nova_compute[295778]: 2025-10-14 10:15:04.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:04 localhost ovn_controller[156286]: 2025-10-14T10:15:04Z|00182|binding|INFO|Releasing lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 from this chassis (sb_readonly=0)
Oct 14 06:15:04 localhost ovn_controller[156286]: 2025-10-14T10:15:04Z|00183|binding|INFO|Setting lport a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 down in Southbound
Oct 14 06:15:04 localhost kernel: device tapa9ef5d87-89 left promiscuous mode
Oct 14 06:15:04 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:04.271 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a9ef5d87-89ba-483d-a3c8-b8bd22f3b794) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:15:04 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:04.273 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a9ef5d87-89ba-483d-a3c8-b8bd22f3b794 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m
Oct 14 06:15:04 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:04.275 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 06:15:04 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:04.276 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb31e8b-43a5-4b7d-a73d-928887ecc30b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:15:04 localhost nova_compute[295778]: 2025-10-14 10:15:04.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:04 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:04.369 2 INFO neutron.agent.securitygroups_rpc [None req-19708c1d-1742-44a2-b489-e8ac0413b03a b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m
Oct 14 06:15:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:04.403 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:15:04 localhost dnsmasq[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/addn_hosts - 1 addresses
Oct 14 06:15:04 localhost podman[329293]: 2025-10-14 10:15:04.462902551 +0000 UTC m=+0.063260105 container kill caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:04 localhost dnsmasq-dhcp[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/host
Oct 14 06:15:04 localhost dnsmasq-dhcp[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/opts
Oct 14 06:15:04 localhost systemd[1]: var-lib-containers-storage-overlay-ad759b92adfb4464f53684c1320828cc5fd90e4e766a34820caeb50c5d4c0d79-merged.mount: Deactivated successfully.
Oct 14 06:15:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-749a670c21ed06d06df7226e198cb9e36bbe9bac6cb3da0e4796ea42d3e22110-userdata-shm.mount: Deactivated successfully.
Oct 14 06:15:04 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully.
Oct 14 06:15:04 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:04.684 2 INFO neutron.agent.securitygroups_rpc [None req-89ed6019-5c44-4ebe-ad04-5bc0e8cf21a8 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m
Oct 14 06:15:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:15:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:04.744 270389 INFO neutron.agent.dhcp.agent [None req-50fcb1d0-7799-4bdc-864b-8b33ef9f5545 - - - - - -] DHCP configuration for ports {'d27c9511-f667-449c-9738-df7a7a07bae5'} is completed#033[00m
Oct 14 06:15:04 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:04.780 2 INFO neutron.agent.securitygroups_rpc [None req-a01b87f3-ce06-413b-afd6-6cfe8241bf52 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m
Oct 14 06:15:05 localhost nova_compute[295778]: 2025-10-14 10:15:05.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:15:05 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:05.280 270389 INFO neutron.agent.linux.ip_lib [None req-4ef0ac45-2b5f-4370-b3db-10fa6134512a - - - - - -] Device tapd52022be-11 cannot be used as it has no MAC address#033[00m
Oct 14 06:15:05 localhost nova_compute[295778]: 2025-10-14 10:15:05.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:05 localhost kernel: device tapd52022be-11 entered promiscuous mode
Oct 14 06:15:05 localhost systemd-udevd[329131]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:15:05 localhost NetworkManager[5972]: [1760436905.3563] manager: (tapd52022be-11): new Generic device (/org/freedesktop/NetworkManager/Devices/37)
Oct 14 06:15:05 localhost ovn_controller[156286]: 2025-10-14T10:15:05Z|00184|binding|INFO|Claiming lport d52022be-1169-421a-959f-23b803a27458 for this chassis.
Oct 14 06:15:05 localhost ovn_controller[156286]: 2025-10-14T10:15:05Z|00185|binding|INFO|d52022be-1169-421a-959f-23b803a27458: Claiming unknown
Oct 14 06:15:05 localhost nova_compute[295778]: 2025-10-14 10:15:05.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:05 localhost ovn_controller[156286]: 2025-10-14T10:15:05Z|00186|binding|INFO|Setting lport d52022be-1169-421a-959f-23b803a27458 ovn-installed in OVS
Oct 14 06:15:05 localhost ovn_controller[156286]: 2025-10-14T10:15:05Z|00187|binding|INFO|Setting lport d52022be-1169-421a-959f-23b803a27458 up in Southbound
Oct 14 06:15:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:05.364 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d52022be-1169-421a-959f-23b803a27458) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:15:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:05.366 161932 INFO neutron.agent.ovn.metadata.agent [-] Port d52022be-1169-421a-959f-23b803a27458 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m
Oct 14 06:15:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:05.368 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 06:15:05 localhost nova_compute[295778]: 2025-10-14 10:15:05.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:05.369 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3a13ca-973e-4ce7-a480-18136e1dc0f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:15:05 localhost nova_compute[295778]: 2025-10-14 10:15:05.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:05 localhost nova_compute[295778]: 2025-10-14 10:15:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:05 localhost nova_compute[295778]: 2025-10-14 10:15:05.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:06 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:06.085 2 INFO neutron.agent.securitygroups_rpc [None req-9eac4a9e-ddab-4fe7-9be2-7771638459ee 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m
Oct 14 06:15:06 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:06.121 2 INFO neutron.agent.securitygroups_rpc [None req-de093b06-e4ad-4bac-9ddf-80da81ca26d6 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m
Oct 14 06:15:06 localhost podman[329376]:
Oct 14 06:15:06 localhost podman[329376]: 2025-10-14 10:15:06.27168201 +0000 UTC m=+0.091074373 container create f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 06:15:06 localhost systemd[1]: Started libpod-conmon-f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb.scope.
Oct 14 06:15:06 localhost podman[329376]: 2025-10-14 10:15:06.227875375 +0000 UTC m=+0.047267688 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:15:06 localhost systemd[1]: Started libcrun container.
Oct 14 06:15:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47086882cc72dcec04b9f89fc0094379371c363b0d0d7ae2427a938fd57fd8ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:15:06 localhost podman[329376]: 2025-10-14 10:15:06.35135607 +0000 UTC m=+0.170748383 container init f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true)
Oct 14 06:15:06 localhost podman[329376]: 2025-10-14 10:15:06.360890404 +0000 UTC m=+0.180282707 container start f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 06:15:06 localhost dnsmasq[329394]: started, version 2.85 cachesize 150
Oct 14 06:15:06 localhost dnsmasq[329394]: DNS service limited to local subnets
Oct 14 06:15:06 localhost dnsmasq[329394]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:15:06 localhost dnsmasq[329394]: warning: no upstream servers configured
Oct 14 06:15:06 localhost dnsmasq-dhcp[329394]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 14 06:15:06 localhost dnsmasq[329394]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:15:06 localhost dnsmasq-dhcp[329394]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:15:06 localhost dnsmasq-dhcp[329394]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:15:06 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:06.417 270389 INFO neutron.agent.dhcp.agent [None req-4ef0ac45-2b5f-4370-b3db-10fa6134512a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:04Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=41fb2493-e48d-4d1c-82df-6b34eb65dc61, ip_allocation=immediate, mac_address=fa:16:3e:9e:33:ad, name=tempest-NetworksTestDHCPv6-687926950, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=6, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['a98800e7-9852-46da-b1c0-9e8e731471e0'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:03Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=1625, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:04Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m
Oct 14 06:15:06 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:06.510 270389 INFO neutron.agent.dhcp.agent [None req-d1dbce9c-e3d0-4133-be36-e1592c51a448 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m
Oct 14 06:15:06 localhost dnsmasq[329394]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses
Oct 14 06:15:06 localhost dnsmasq-dhcp[329394]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:15:06 localhost podman[329413]: 2025-10-14 10:15:06.613458883 +0000 UTC m=+0.062379611 container kill f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 06:15:06 localhost dnsmasq-dhcp[329394]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:15:06 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:06.718 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:03Z, description=, device_id=6bbcb3d8-8bed-436a-b383-8ae8842802b2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d27c9511-f667-449c-9738-df7a7a07bae5, ip_allocation=immediate, mac_address=fa:16:3e:2e:4e:b8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:53Z, description=, dns_domain=, id=d9e53ed8-ad92-47c7-993a-500ed592c18d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1256037055-network, port_security_enabled=True, project_id=458840010c184f038de4a002f5b46e4a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18184, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1572, status=ACTIVE, subnets=['26daf31a-98bc-4024-bf6b-43b59aba9500'], tags=[], tenant_id=458840010c184f038de4a002f5b46e4a, updated_at=2025-10-14T10:14:54Z, vlan_transparent=None, network_id=d9e53ed8-ad92-47c7-993a-500ed592c18d, port_security_enabled=False, project_id=458840010c184f038de4a002f5b46e4a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1624, status=DOWN, tags=[], tenant_id=458840010c184f038de4a002f5b46e4a, updated_at=2025-10-14T10:15:04Z on network d9e53ed8-ad92-47c7-993a-500ed592c18d#033[00m
Oct 14 06:15:06 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:06.825 270389 INFO neutron.agent.dhcp.agent [None req-d2141c31-d0aa-4377-887b-9774d093a86c - - - - - -] DHCP configuration for ports {'41fb2493-e48d-4d1c-82df-6b34eb65dc61'} is completed#033[00m
Oct 14 06:15:06 localhost dnsmasq[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/addn_hosts - 1 addresses
Oct 14 06:15:06 localhost dnsmasq-dhcp[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/host
Oct 14 06:15:06 localhost podman[329464]: 2025-10-14 10:15:06.975982588 +0000 UTC m=+0.065606547 container kill caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:06 localhost dnsmasq-dhcp[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/opts
Oct 14 06:15:07 localhost dnsmasq[329394]: exiting on receipt of SIGTERM
Oct 14 06:15:07 localhost podman[329475]: 2025-10-14 10:15:07.013198057 +0000 UTC m=+0.069959162 container kill f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:15:07 localhost systemd[1]: libpod-f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb.scope: Deactivated successfully.
Oct 14 06:15:07 localhost podman[329493]: 2025-10-14 10:15:07.081405102 +0000 UTC m=+0.055755664 container died f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 06:15:07 localhost podman[329493]: 2025-10-14 10:15:07.166169967 +0000 UTC m=+0.140520529 container cleanup f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 06:15:07 localhost systemd[1]: libpod-conmon-f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb.scope: Deactivated successfully.
Oct 14 06:15:07 localhost podman[329495]: 2025-10-14 10:15:07.189351684 +0000 UTC m=+0.153987498 container remove f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:07 localhost nova_compute[295778]: 2025-10-14 10:15:07.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:07 localhost ovn_controller[156286]: 2025-10-14T10:15:07Z|00188|binding|INFO|Releasing lport d52022be-1169-421a-959f-23b803a27458 from this chassis (sb_readonly=0)
Oct 14 06:15:07 localhost kernel: device tapd52022be-11 left promiscuous mode
Oct 14 06:15:07 localhost ovn_controller[156286]: 2025-10-14T10:15:07Z|00189|binding|INFO|Setting lport d52022be-1169-421a-959f-23b803a27458 down in Southbound
Oct 14 06:15:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:15:07 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:07.219 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d52022be-1169-421a-959f-23b803a27458) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:15:07 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:07.220 161932 INFO neutron.agent.ovn.metadata.agent [-] Port d52022be-1169-421a-959f-23b803a27458 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m
Oct 14 06:15:07 localhost nova_compute[295778]: 2025-10-14 10:15:07.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:07 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:07.225 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 06:15:07 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:07.227 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[5b76220e-ad9e-44f1-ba46-b907a5b266d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:15:07 localhost systemd[1]: var-lib-containers-storage-overlay-47086882cc72dcec04b9f89fc0094379371c363b0d0d7ae2427a938fd57fd8ec-merged.mount: Deactivated successfully.
Oct 14 06:15:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f41eec73ab959275394a96b70787d9c1cc8b0f5bdf5454d176dc68c270d02cdb-userdata-shm.mount: Deactivated successfully.
Oct 14 06:15:07 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:07.281 270389 INFO neutron.agent.dhcp.agent [None req-c703398e-a0ec-42a4-ac27-377e69139d18 - - - - - -] DHCP configuration for ports {'d27c9511-f667-449c-9738-df7a7a07bae5'} is completed#033[00m
Oct 14 06:15:07 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully.
Oct 14 06:15:07 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:07.442 270389 INFO neutron.agent.dhcp.agent [None req-a263a84f-665c-4e46-91a7-7a24c41be152 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:15:07 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:07.443 270389 INFO neutron.agent.dhcp.agent [None req-a263a84f-665c-4e46-91a7-7a24c41be152 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:15:07 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:07.444 270389 INFO neutron.agent.dhcp.agent [None req-a263a84f-665c-4e46-91a7-7a24c41be152 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:15:07 localhost nova_compute[295778]: 2025-10-14 10:15:07.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:08 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:08.653 2 INFO neutron.agent.securitygroups_rpc [None req-2ac4edd3-ef06-4603-af78-165f86524b6c b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m
Oct 14 06:15:08 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:08.749 2 INFO neutron.agent.securitygroups_rpc [None req-6ff06281-5378-4836-8ecf-7135bb6770f8 476187b4066141bb9d0e00e94ed7295c 7bf1be3a6a454996a4414fad306906f1 - - default default] Security group member updated ['a0f73c72-581b-41a5-a47e-a3f1b6149df7']#033[00m
Oct 14 06:15:08 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:08.777 270389 INFO neutron.agent.linux.ip_lib [None req-e1efe326-c261-4616-b953-a397c858b080 - - - - - -] Device tap8ccac8f8-24 cannot be used as it has no MAC address#033[00m
Oct 14 06:15:08 localhost nova_compute[295778]: 2025-10-14 10:15:08.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:08 localhost kernel: device tap8ccac8f8-24 entered promiscuous mode
Oct 14 06:15:08 localhost NetworkManager[5972]: [1760436908.8047] manager: (tap8ccac8f8-24): new Generic device (/org/freedesktop/NetworkManager/Devices/38)
Oct 14 06:15:08 localhost nova_compute[295778]: 2025-10-14 10:15:08.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:08 localhost ovn_controller[156286]: 2025-10-14T10:15:08Z|00190|binding|INFO|Claiming lport 8ccac8f8-2419-4bbc-83eb-9cc62a729914 for this chassis.
Oct 14 06:15:08 localhost ovn_controller[156286]: 2025-10-14T10:15:08Z|00191|binding|INFO|8ccac8f8-2419-4bbc-83eb-9cc62a729914: Claiming unknown
Oct 14 06:15:08 localhost systemd-udevd[329540]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:15:08 localhost ovn_controller[156286]: 2025-10-14T10:15:08Z|00192|binding|INFO|Setting lport 8ccac8f8-2419-4bbc-83eb-9cc62a729914 up in Southbound
Oct 14 06:15:08 localhost ovn_controller[156286]: 2025-10-14T10:15:08Z|00193|binding|INFO|Setting lport 8ccac8f8-2419-4bbc-83eb-9cc62a729914 ovn-installed in OVS
Oct 14 06:15:08 localhost nova_compute[295778]: 2025-10-14 10:15:08.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:08.816 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8ccac8f8-2419-4bbc-83eb-9cc62a729914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:15:08 localhost nova_compute[295778]: 2025-10-14 10:15:08.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:08.819 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 8ccac8f8-2419-4bbc-83eb-9cc62a729914 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m
Oct 14 06:15:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:08.821 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 06:15:08 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:08.823 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5bf8f9-cda0-4cd6-a54c-fe9d33e21490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:15:08 localhost journal[236030]: ethtool ioctl error on tap8ccac8f8-24: No such device
Oct 14 06:15:08 localhost nova_compute[295778]: 2025-10-14 10:15:08.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:08 localhost journal[236030]: ethtool ioctl error on tap8ccac8f8-24: No such device
Oct 14 06:15:08 localhost journal[236030]: ethtool ioctl error on tap8ccac8f8-24: No such device
Oct 14 06:15:08 localhost journal[236030]: ethtool ioctl error on tap8ccac8f8-24: No such device
Oct 14 06:15:08 localhost journal[236030]: ethtool ioctl error on tap8ccac8f8-24: No such device
Oct 14 06:15:08 localhost journal[236030]: ethtool ioctl error on tap8ccac8f8-24: No such device
Oct 14 06:15:08 localhost journal[236030]: ethtool ioctl error on tap8ccac8f8-24: No such device
Oct 14 06:15:08 localhost journal[236030]: ethtool ioctl error on tap8ccac8f8-24: No such device
Oct 14 06:15:08 localhost nova_compute[295778]: 2025-10-14 10:15:08.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:08 localhost nova_compute[295778]: 2025-10-14 10:15:08.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:15:09
Oct 14 06:15:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:15:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap
Oct 14 06:15:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['vms', 'manila_data', 'backups', '.mgr', 'images', 'manila_metadata', 'volumes']
Oct 14 06:15:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:15:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:15:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:15:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:15:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:15:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:15:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:15:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32)
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:15:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:15:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:15:09 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:09.613 2 INFO neutron.agent.securitygroups_rpc [None req-33756dd7-b8a3-4f2a-bb6b-acf9d13d094f 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m
Oct 14 06:15:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:15:09 localhost podman[329611]:
Oct 14 06:15:09 localhost podman[329611]: 2025-10-14 10:15:09.720878971 +0000 UTC m=+0.092182013 container create d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:09 localhost systemd[1]: Started libpod-conmon-d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8.scope.
Oct 14 06:15:09 localhost podman[329611]: 2025-10-14 10:15:09.678678319 +0000 UTC m=+0.049981371 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:15:09 localhost systemd[1]: Started libcrun container.
Oct 14 06:15:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7a817bac43fc1982147ccc08e1113b251175696a9e6ce3552bd0cb4ee86582/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:09 localhost podman[329611]: 2025-10-14 10:15:09.8084266 +0000 UTC m=+0.179729642 container init d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:15:09 localhost podman[329611]: 2025-10-14 10:15:09.817137632 +0000 UTC m=+0.188440674 container start d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:15:09 localhost dnsmasq[329630]: started, version 2.85 cachesize 150 Oct 14 06:15:09 localhost dnsmasq[329630]: DNS service limited to local subnets Oct 14 06:15:09 localhost dnsmasq[329630]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:09 localhost dnsmasq[329630]: warning: no upstream servers 
configured Oct 14 06:15:09 localhost dnsmasq-dhcp[329630]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:15:09 localhost dnsmasq[329630]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:09 localhost dnsmasq-dhcp[329630]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:09 localhost dnsmasq-dhcp[329630]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:09 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:09.879 270389 INFO neutron.agent.dhcp.agent [None req-e1efe326-c261-4616-b953-a397c858b080 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:08Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1ab08e78-5b99-4db9-9a1b-2f5633f828e9, ip_allocation=immediate, mac_address=fa:16:3e:83:25:2a, name=tempest-NetworksTestDHCPv6-1581258944, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=8, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['afb2a429-9630-427e-a6d1-634c564949ba'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:07Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, 
project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=1655, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:09Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:15:09 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:09.937 270389 INFO neutron.agent.dhcp.agent [None req-17008351-2851-468f-b8f6-0317ad98001b - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:15:10 localhost podman[329649]: 2025-10-14 10:15:10.071100559 +0000 UTC m=+0.056977718 container kill d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:15:10 localhost dnsmasq[329630]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:15:10 localhost dnsmasq-dhcp[329630]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:10 localhost dnsmasq-dhcp[329630]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:10 localhost nova_compute[295778]: 2025-10-14 10:15:10.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:10 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:10.321 270389 INFO neutron.agent.dhcp.agent [None 
req-65a274c3-520d-4ae6-ad3f-b6008435e1b2 - - - - - -] DHCP configuration for ports {'1ab08e78-5b99-4db9-9a1b-2f5633f828e9'} is completed#033[00m Oct 14 06:15:10 localhost ovn_controller[156286]: 2025-10-14T10:15:10Z|00194|ovn_bfd|INFO|Enabled BFD on interface ovn-31b4da-0 Oct 14 06:15:10 localhost ovn_controller[156286]: 2025-10-14T10:15:10Z|00195|ovn_bfd|INFO|Enabled BFD on interface ovn-953af5-0 Oct 14 06:15:10 localhost ovn_controller[156286]: 2025-10-14T10:15:10Z|00196|ovn_bfd|INFO|Enabled BFD on interface ovn-4e3575-0 Oct 14 06:15:10 localhost nova_compute[295778]: 2025-10-14 10:15:10.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:10 localhost nova_compute[295778]: 2025-10-14 10:15:10.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:10 localhost nova_compute[295778]: 2025-10-14 10:15:10.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:11 localhost nova_compute[295778]: 2025-10-14 10:15:11.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:11 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:11.025 2 INFO neutron.agent.securitygroups_rpc [None req-1db2801f-7f14-475b-9034-d870e9398832 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:15:11 localhost nova_compute[295778]: 2025-10-14 10:15:11.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:11 localhost nova_compute[295778]: 2025-10-14 10:15:11.068 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:11 localhost dnsmasq[329630]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:11 localhost podman[329686]: 2025-10-14 10:15:11.229180997 +0000 UTC m=+0.058726783 container kill d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 06:15:11 localhost dnsmasq-dhcp[329630]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:11 localhost dnsmasq-dhcp[329630]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:11 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:11.611 2 INFO neutron.agent.securitygroups_rpc [None req-b3891885-72c1-4c57-baf6-f44e7a046970 b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m Oct 14 06:15:11 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:11.627 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:11 localhost podman[329723]: 2025-10-14 10:15:11.809930417 +0000 UTC m=+0.061117296 container kill 
d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:11 localhost systemd[1]: tmp-crun.QVytDR.mount: Deactivated successfully. Oct 14 06:15:11 localhost dnsmasq[329630]: exiting on receipt of SIGTERM Oct 14 06:15:11 localhost systemd[1]: libpod-d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8.scope: Deactivated successfully. Oct 14 06:15:11 localhost podman[329736]: 2025-10-14 10:15:11.879294803 +0000 UTC m=+0.051167502 container died d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:11 localhost systemd[1]: tmp-crun.g6RySc.mount: Deactivated successfully. 
Oct 14 06:15:11 localhost podman[329736]: 2025-10-14 10:15:11.920978312 +0000 UTC m=+0.092850971 container cleanup d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:11 localhost systemd[1]: libpod-conmon-d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8.scope: Deactivated successfully. Oct 14 06:15:11 localhost podman[329737]: 2025-10-14 10:15:11.95660327 +0000 UTC m=+0.123935569 container remove d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:12 localhost nova_compute[295778]: 2025-10-14 10:15:12.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:12 localhost ovn_controller[156286]: 2025-10-14T10:15:12Z|00197|binding|INFO|Releasing lport 8ccac8f8-2419-4bbc-83eb-9cc62a729914 from this chassis (sb_readonly=0) Oct 14 06:15:12 localhost ovn_controller[156286]: 2025-10-14T10:15:12Z|00198|binding|INFO|Setting lport 8ccac8f8-2419-4bbc-83eb-9cc62a729914 
down in Southbound Oct 14 06:15:12 localhost kernel: device tap8ccac8f8-24 left promiscuous mode Oct 14 06:15:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:12.014 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8ccac8f8-2419-4bbc-83eb-9cc62a729914) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:12.016 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 8ccac8f8-2419-4bbc-83eb-9cc62a729914 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:15:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:12.019 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for 
network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:12.020 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6c81cd-93c7-45fe-92bd-d3aa972200f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:12 localhost nova_compute[295778]: 2025-10-14 10:15:12.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:12 localhost nova_compute[295778]: 2025-10-14 10:15:12.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:12 localhost nova_compute[295778]: 2025-10-14 10:15:12.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:12 localhost nova_compute[295778]: 2025-10-14 10:15:12.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:12 localhost nova_compute[295778]: 2025-10-14 10:15:12.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:12 localhost systemd[1]: var-lib-containers-storage-overlay-cb7a817bac43fc1982147ccc08e1113b251175696a9e6ce3552bd0cb4ee86582-merged.mount: Deactivated successfully. Oct 14 06:15:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9a1e142e293523b8c86d0eef00c92e76a25062509c8c69510c286e960444ce8-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:15:12 localhost nova_compute[295778]: 2025-10-14 10:15:12.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:13 localhost nova_compute[295778]: 2025-10-14 10:15:13.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:13.005 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:13.006 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:15:13 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. 
Oct 14 06:15:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:13 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e130 do_prune osdmap full prune enabled Oct 14 06:15:13 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e131 e131: 6 total, 6 up, 6 in Oct 14 06:15:13 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e131: 6 total, 6 up, 6 in Oct 14 06:15:13 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:13.430 2 INFO neutron.agent.securitygroups_rpc [None req-c2f5f097-bba7-437f-a590-36885716465b 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:15:13 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:13.802 2 INFO neutron.agent.securitygroups_rpc [None req-cd2f7380-4784-4a84-ad74-fc6a911c5024 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:15:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:15:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:14.029 270389 INFO neutron.agent.linux.ip_lib [None req-4f756c6f-1c83-4b12-940b-2123bd816acd - - - - - -] Device tapfbdcaa7d-1c cannot be used as it has no MAC address#033[00m Oct 14 06:15:14 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:14.074 2 INFO neutron.agent.securitygroups_rpc [None req-434a8d80-a33b-48b3-84be-9de6425cb325 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost kernel: device tapfbdcaa7d-1c entered promiscuous mode Oct 14 06:15:14 localhost NetworkManager[5972]: [1760436914.0875] manager: (tapfbdcaa7d-1c): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Oct 14 06:15:14 localhost ovn_controller[156286]: 2025-10-14T10:15:14Z|00199|binding|INFO|Claiming lport fbdcaa7d-1c91-4655-ae63-1923519d1dea for this chassis. Oct 14 06:15:14 localhost ovn_controller[156286]: 2025-10-14T10:15:14Z|00200|binding|INFO|fbdcaa7d-1c91-4655-ae63-1923519d1dea: Claiming unknown Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost systemd-udevd[329794]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:15:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:14.107 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fbdcaa7d-1c91-4655-ae63-1923519d1dea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:14.109 161932 INFO neutron.agent.ovn.metadata.agent [-] Port fbdcaa7d-1c91-4655-ae63-1923519d1dea in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:15:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:14.112 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:14 localhost podman[329768]: 2025-10-14 10:15:14.113906312 +0000 UTC m=+0.139399980 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:14 localhost ovn_metadata_agent[161927]: 
2025-10-14 10:15:14.114 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b0732fce-52f0-4107-9f42-b4a8be511e31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:14 localhost podman[329768]: 2025-10-14 10:15:14.145845001 +0000 UTC m=+0.171338609 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost ovn_controller[156286]: 2025-10-14T10:15:14Z|00201|binding|INFO|Setting lport fbdcaa7d-1c91-4655-ae63-1923519d1dea ovn-installed in OVS Oct 14 06:15:14 localhost ovn_controller[156286]: 2025-10-14T10:15:14Z|00202|binding|INFO|Setting lport fbdcaa7d-1c91-4655-ae63-1923519d1dea up in Southbound Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:14.204 270389 INFO neutron.agent.linux.ip_lib [None req-7382307f-8265-48fc-ae59-6f22628c1998 - - - - - -] Device tape674359a-fd cannot be used as it has no MAC address#033[00m Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost kernel: device tape674359a-fd entered promiscuous mode Oct 14 06:15:14 localhost systemd-udevd[329798]: Network interface NamePolicy= disabled on kernel command 
line. Oct 14 06:15:14 localhost NetworkManager[5972]: [1760436914.2345] manager: (tape674359a-fd): new Generic device (/org/freedesktop/NetworkManager/Devices/40) Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost ovn_controller[156286]: 2025-10-14T10:15:14Z|00203|binding|INFO|Claiming lport e674359a-fd2b-45e3-b84c-8b831ac8fef0 for this chassis. Oct 14 06:15:14 localhost ovn_controller[156286]: 2025-10-14T10:15:14Z|00204|binding|INFO|e674359a-fd2b-45e3-b84c-8b831ac8fef0: Claiming unknown Oct 14 06:15:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:14.254 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-d8959a5a-2c8e-4705-9c24-b22fa8f34b96', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d8959a5a-2c8e-4705-9c24-b22fa8f34b96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a840994a70374548889747682f4c0fa3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c1f6b3-97dd-4d72-a209-75719ef0ace0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=e674359a-fd2b-45e3-b84c-8b831ac8fef0) 
old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:14.256 161932 INFO neutron.agent.ovn.metadata.agent [-] Port e674359a-fd2b-45e3-b84c-8b831ac8fef0 in datapath d8959a5a-2c8e-4705-9c24-b22fa8f34b96 bound to our chassis#033[00m Oct 14 06:15:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:14.258 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d8959a5a-2c8e-4705-9c24-b22fa8f34b96 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:14 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:14.259 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[6b26ad55-5502-495a-993f-faa16b3bbaf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost ovn_controller[156286]: 2025-10-14T10:15:14Z|00205|binding|INFO|Setting lport e674359a-fd2b-45e3-b84c-8b831ac8fef0 ovn-installed in OVS Oct 14 06:15:14 localhost ovn_controller[156286]: 2025-10-14T10:15:14Z|00206|binding|INFO|Setting lport e674359a-fd2b-45e3-b84c-8b831ac8fef0 up in Southbound Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost nova_compute[295778]: 
2025-10-14 10:15:14.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost nova_compute[295778]: 2025-10-14 10:15:14.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:14 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:14.360 2 INFO neutron.agent.securitygroups_rpc [None req-204ba6d8-4885-47e0-aefa-27975646d087 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:15:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e131 do_prune osdmap full prune enabled Oct 14 06:15:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e132 e132: 6 total, 6 up, 6 in Oct 14 06:15:14 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e132: 6 total, 6 up, 6 in Oct 14 06:15:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:15:15 localhost nova_compute[295778]: 2025-10-14 10:15:15.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:15 localhost podman[329889]: Oct 14 06:15:15 localhost podman[329889]: 2025-10-14 10:15:15.214902643 +0000 UTC m=+0.145962024 container create 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:15:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s Oct 14 06:15:15 localhost podman[329889]: 2025-10-14 10:15:15.119415492 +0000 UTC m=+0.050474943 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:15 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:15.232 2 INFO neutron.agent.securitygroups_rpc [None req-091340ae-b286-4614-8179-3bb6eeb30cfa 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:15:15 localhost systemd[1]: Started libpod-conmon-478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f.scope. Oct 14 06:15:15 localhost systemd[1]: tmp-crun.zMwUWg.mount: Deactivated successfully. Oct 14 06:15:15 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efc9fbeef0b147e76a88f00251498e3f79518d5743fa84ef6e4e6a12acc2952/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:15 localhost podman[329924]: Oct 14 06:15:15 localhost podman[329889]: 2025-10-14 10:15:15.311466111 +0000 UTC m=+0.242525482 container init 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:15:15 localhost podman[329889]: 2025-10-14 10:15:15.321651892 +0000 UTC m=+0.252711263 container start 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:15 localhost dnsmasq[329942]: started, version 2.85 cachesize 150 Oct 14 06:15:15 localhost dnsmasq[329942]: DNS service limited to local subnets Oct 14 06:15:15 localhost dnsmasq[329942]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:15 localhost 
dnsmasq[329942]: warning: no upstream servers configured Oct 14 06:15:15 localhost dnsmasq[329942]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:15 localhost podman[329924]: 2025-10-14 10:15:15.373854272 +0000 UTC m=+0.149043247 container create 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 14 06:15:15 localhost podman[329924]: 2025-10-14 10:15:15.279890191 +0000 UTC m=+0.055079166 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:15 localhost systemd[1]: Started libpod-conmon-85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe.scope. Oct 14 06:15:15 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6782814d967fccf2ebdc724a570780b1a4088bb8a23c2ea43a18bd0265c343d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:15 localhost podman[329924]: 2025-10-14 10:15:15.441391458 +0000 UTC m=+0.216580443 container init 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:15:15 localhost podman[329924]: 2025-10-14 10:15:15.45047544 +0000 UTC m=+0.225664375 container start 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:15 localhost dnsmasq[329948]: started, version 2.85 cachesize 150 Oct 14 06:15:15 localhost dnsmasq[329948]: DNS service limited to local subnets Oct 14 06:15:15 localhost dnsmasq[329948]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:15 localhost dnsmasq[329948]: warning: no upstream servers 
configured Oct 14 06:15:15 localhost dnsmasq-dhcp[329948]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:15:15 localhost dnsmasq[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/addn_hosts - 0 addresses Oct 14 06:15:15 localhost dnsmasq-dhcp[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/host Oct 14 06:15:15 localhost dnsmasq-dhcp[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/opts Oct 14 06:15:15 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:15.487 270389 INFO neutron.agent.dhcp.agent [None req-927c7d32-f2a3-4474-a7fb-63348c4115d7 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:15:15 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:15.509 270389 INFO neutron.agent.dhcp.agent [None req-7382307f-8265-48fc-ae59-6f22628c1998 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:13Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fc064d58-6f19-4a1d-9ed3-1c6b83e5a65d, ip_allocation=immediate, mac_address=fa:16:3e:4e:e4:2b, name=tempest-PortsIpV6TestJSON-455121961, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:15:10Z, description=, dns_domain=, id=d8959a5a-2c8e-4705-9c24-b22fa8f34b96, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1128193368, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62181, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1661, status=ACTIVE, 
subnets=['894774ba-9d72-4672-bd05-ebe93d9a6ef9'], tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:13Z, vlan_transparent=None, network_id=d8959a5a-2c8e-4705-9c24-b22fa8f34b96, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['59283390-a499-4358-9f49-155fd8075ea9'], standard_attr_id=1679, status=DOWN, tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:13Z on network d8959a5a-2c8e-4705-9c24-b22fa8f34b96#033[00m Oct 14 06:15:15 localhost dnsmasq[329942]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:15 localhost podman[329979]: 2025-10-14 10:15:15.64821348 +0000 UTC m=+0.061243530 container kill 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:15:15 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:15.652 270389 INFO neutron.agent.dhcp.agent [None req-8e58db7b-c3c7-42b4-8379-3963792069ff - - - - - -] DHCP configuration for ports {'5406f545-9462-4938-812c-42a10adb68f4'} is completed#033[00m Oct 14 06:15:15 localhost dnsmasq[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/addn_hosts - 1 addresses Oct 14 06:15:15 localhost dnsmasq-dhcp[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/host Oct 14 06:15:15 localhost podman[329990]: 2025-10-14 10:15:15.673531344 +0000 UTC 
m=+0.054948623 container kill 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:15:15 localhost dnsmasq-dhcp[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/opts Oct 14 06:15:15 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:15.819 270389 INFO neutron.agent.dhcp.agent [None req-d6281322-0da2-4837-bebf-ab0e27269777 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:12Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8ca88b88-23c0-4c4b-8761-d4c3cb0a3cde, ip_allocation=immediate, mac_address=fa:16:3e:41:ab:29, name=tempest-NetworksTestDHCPv6-885051318, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=10, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, 
subnets=['879eec51-4667-4292-bf89-e70b7f6c7e99'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:11Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=1668, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:12Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:15:15 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:15.854 2 INFO neutron.agent.securitygroups_rpc [None req-22ce8ae9-860b-42ef-93b2-044fcbfa7c60 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:15:15 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:15.856 270389 INFO neutron.agent.dhcp.agent [None req-7382307f-8265-48fc-ae59-6f22628c1998 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:14Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=aeb47a12-9f1f-4046-be48-e36114a3254a, ip_allocation=immediate, mac_address=fa:16:3e:dc:94:eb, name=tempest-PortsIpV6TestJSON-545156189, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:15:10Z, description=, dns_domain=, id=d8959a5a-2c8e-4705-9c24-b22fa8f34b96, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1128193368, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, 
provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62181, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1661, status=ACTIVE, subnets=['894774ba-9d72-4672-bd05-ebe93d9a6ef9'], tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:13Z, vlan_transparent=None, network_id=d8959a5a-2c8e-4705-9c24-b22fa8f34b96, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['59283390-a499-4358-9f49-155fd8075ea9'], standard_attr_id=1682, status=DOWN, tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:14Z on network d8959a5a-2c8e-4705-9c24-b22fa8f34b96#033[00m Oct 14 06:15:15 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:15.904 2 INFO neutron.agent.securitygroups_rpc [None req-c06e5ff4-6a19-4ed9-a361-5ee68eed5c1d 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:15:15 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:15.955 270389 INFO neutron.agent.dhcp.agent [None req-32cd9bfb-44d7-4af6-94a2-9c769cd0b2bb - - - - - -] DHCP configuration for ports {'fc064d58-6f19-4a1d-9ed3-1c6b83e5a65d', 'fbdcaa7d-1c91-4655-ae63-1923519d1dea', 'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:15:16 localhost dnsmasq[329942]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:15:16 localhost podman[330053]: 2025-10-14 10:15:16.009105071 +0000 UTC m=+0.061898678 container kill 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:16 localhost dnsmasq[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/addn_hosts - 2 addresses Oct 14 06:15:16 localhost dnsmasq-dhcp[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/host Oct 14 06:15:16 localhost dnsmasq-dhcp[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/opts Oct 14 06:15:16 localhost podman[330068]: 2025-10-14 10:15:16.049908657 +0000 UTC m=+0.050451943 container kill 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:16 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:16.194 270389 INFO neutron.agent.dhcp.agent [None req-d6281322-0da2-4837-bebf-ab0e27269777 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:15Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2db88be9-8d6f-4f7f-8e4c-a8ef3d84fce7, ip_allocation=immediate, mac_address=fa:16:3e:2e:5f:4d, name=tempest-NetworksTestDHCPv6-80615061, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=12, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['7a8179b5-263c-447e-9b8b-4f7dccdc8681'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:14Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=1690, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:15Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:15:16 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:16.281 270389 INFO neutron.agent.dhcp.agent [None req-48b92f0b-fc15-40f6-8386-4e83c37b600e - - - - - -] DHCP configuration for ports {'aeb47a12-9f1f-4046-be48-e36114a3254a', '8ca88b88-23c0-4c4b-8761-d4c3cb0a3cde'} is completed#033[00m Oct 14 06:15:16 localhost podman[330125]: 2025-10-14 10:15:16.388808853 +0000 UTC m=+0.060658075 container kill 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 06:15:16 localhost dnsmasq[329942]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 2 addresses Oct 14 06:15:16 localhost dnsmasq[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/addn_hosts - 1 addresses Oct 14 06:15:16 localhost dnsmasq-dhcp[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/host Oct 14 06:15:16 localhost podman[330145]: 2025-10-14 10:15:16.464444895 +0000 UTC m=+0.072201922 container kill 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:15:16 localhost dnsmasq-dhcp[329948]: read /var/lib/neutron/dhcp/d8959a5a-2c8e-4705-9c24-b22fa8f34b96/opts Oct 14 06:15:16 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:16.647 270389 INFO neutron.agent.dhcp.agent [None req-4072ca82-6bb5-4a3d-be38-46e49e615e8d - - - - - -] DHCP configuration for ports {'2db88be9-8d6f-4f7f-8e4c-a8ef3d84fce7'} is completed#033[00m Oct 14 06:15:16 localhost dnsmasq[329942]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:15:16 localhost podman[330195]: 2025-10-14 10:15:16.779159987 +0000 UTC m=+0.059230896 container kill 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:16 localhost dnsmasq[329948]: exiting on receipt of SIGTERM Oct 14 06:15:16 localhost podman[330229]: 2025-10-14 10:15:16.888581128 +0000 UTC m=+0.038450963 container kill 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:15:16 localhost systemd[1]: libpod-85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe.scope: Deactivated successfully. 
Oct 14 06:15:16 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:16.901 2 INFO neutron.agent.securitygroups_rpc [None req-a4deacba-ad68-44d1-8932-5e1de4d92966 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:15:16 localhost podman[330245]: 2025-10-14 10:15:16.935810645 +0000 UTC m=+0.038397163 container died 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:15:16 localhost podman[330245]: 2025-10-14 10:15:16.958486008 +0000 UTC m=+0.061072486 container cleanup 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:15:16 localhost systemd[1]: libpod-conmon-85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe.scope: Deactivated successfully. 
Oct 14 06:15:16 localhost podman[330252]: 2025-10-14 10:15:16.984832739 +0000 UTC m=+0.071139034 container remove 85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d8959a5a-2c8e-4705-9c24-b22fa8f34b96, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 06:15:16 localhost ovn_controller[156286]: 2025-10-14T10:15:16Z|00207|binding|INFO|Releasing lport e674359a-fd2b-45e3-b84c-8b831ac8fef0 from this chassis (sb_readonly=0)
Oct 14 06:15:16 localhost nova_compute[295778]: 2025-10-14 10:15:16.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:16 localhost ovn_controller[156286]: 2025-10-14T10:15:16Z|00208|binding|INFO|Setting lport e674359a-fd2b-45e3-b84c-8b831ac8fef0 down in Southbound
Oct 14 06:15:16 localhost kernel: device tape674359a-fd left promiscuous mode
Oct 14 06:15:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:17.007 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-d8959a5a-2c8e-4705-9c24-b22fa8f34b96', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d8959a5a-2c8e-4705-9c24-b22fa8f34b96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a840994a70374548889747682f4c0fa3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c1f6b3-97dd-4d72-a209-75719ef0ace0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=e674359a-fd2b-45e3-b84c-8b831ac8fef0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:15:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:17.008 161932 INFO neutron.agent.ovn.metadata.agent [-] Port e674359a-fd2b-45e3-b84c-8b831ac8fef0 in datapath d8959a5a-2c8e-4705-9c24-b22fa8f34b96 unbound from our chassis
Oct 14 06:15:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:17.009 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d8959a5a-2c8e-4705-9c24-b22fa8f34b96 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 06:15:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:17.010 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[5b15b77d-57b5-4c5c-b6bd-f51abd102d1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:15:17 localhost nova_compute[295778]: 2025-10-14 10:15:17.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:17 localhost dnsmasq[329942]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:15:17 localhost podman[330292]: 2025-10-14 10:15:17.109977818 +0000 UTC m=+0.038757652 container kill 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true)
Oct 14 06:15:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Oct 14 06:15:17 localhost systemd[1]: tmp-crun.9jmQBh.mount: Deactivated successfully.
Oct 14 06:15:17 localhost systemd[1]: var-lib-containers-storage-overlay-6782814d967fccf2ebdc724a570780b1a4088bb8a23c2ea43a18bd0265c343d8-merged.mount: Deactivated successfully.
Oct 14 06:15:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85b2f219782fa8851409a4ff96a8794d14540f4f1a6e0151f623addbedcc99fe-userdata-shm.mount: Deactivated successfully.
Oct 14 06:15:17 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:17.245 270389 INFO neutron.agent.dhcp.agent [None req-6238ba31-d4e4-4abe-92f7-c8690732ae75 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 14 06:15:17 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:17.245 270389 INFO neutron.agent.dhcp.agent [None req-6238ba31-d4e4-4abe-92f7-c8690732ae75 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 14 06:15:17 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:17.246 270389 INFO neutron.agent.dhcp.agent [None req-6238ba31-d4e4-4abe-92f7-c8690732ae75 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 14 06:15:17 localhost systemd[1]: run-netns-qdhcp\x2dd8959a5a\x2d2c8e\x2d4705\x2d9c24\x2db22fa8f34b96.mount: Deactivated successfully.
Oct 14 06:15:17 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:17.858 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 14 06:15:17 localhost nova_compute[295778]: 2025-10-14 10:15:17.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:18.009 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 06:15:18 localhost podman[330330]: 2025-10-14 10:15:18.176820511 +0000 UTC m=+0.061441386 container kill 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:15:18 localhost dnsmasq[329942]: exiting on receipt of SIGTERM
Oct 14 06:15:18 localhost systemd[1]: libpod-478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f.scope: Deactivated successfully.
Oct 14 06:15:18 localhost podman[330343]: 2025-10-14 10:15:18.255586296 +0000 UTC m=+0.057165551 container died 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 06:15:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f-userdata-shm.mount: Deactivated successfully.
Oct 14 06:15:18 localhost systemd[1]: var-lib-containers-storage-overlay-2efc9fbeef0b147e76a88f00251498e3f79518d5743fa84ef6e4e6a12acc2952-merged.mount: Deactivated successfully.
Oct 14 06:15:18 localhost podman[330343]: 2025-10-14 10:15:18.316703142 +0000 UTC m=+0.118282347 container remove 478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:15:18 localhost nova_compute[295778]: 2025-10-14 10:15:18.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:18 localhost kernel: device tapfbdcaa7d-1c left promiscuous mode
Oct 14 06:15:18 localhost ovn_controller[156286]: 2025-10-14T10:15:18Z|00209|binding|INFO|Releasing lport fbdcaa7d-1c91-4655-ae63-1923519d1dea from this chassis (sb_readonly=0)
Oct 14 06:15:18 localhost ovn_controller[156286]: 2025-10-14T10:15:18Z|00210|binding|INFO|Setting lport fbdcaa7d-1c91-4655-ae63-1923519d1dea down in Southbound
Oct 14 06:15:18 localhost nova_compute[295778]: 2025-10-14 10:15:18.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:18 localhost systemd[1]: libpod-conmon-478e6225e51b7a9675e2097a9cbea61e8b946430a5bcd9f905b226f0542d2d4f.scope: Deactivated successfully.
Oct 14 06:15:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:18.361 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fbdcaa7d-1c91-4655-ae63-1923519d1dea) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:15:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:18.363 161932 INFO neutron.agent.ovn.metadata.agent [-] Port fbdcaa7d-1c91-4655-ae63-1923519d1dea in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis
Oct 14 06:15:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:18.366 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 06:15:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:18.367 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[3018598c-10cd-437d-8ad4-81c417992dfa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:15:18 localhost nova_compute[295778]: 2025-10-14 10:15:18.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:18 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully.
Oct 14 06:15:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 06:15:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 06:15:18 localhost podman[330371]: 2025-10-14 10:15:18.94920176 +0000 UTC m=+0.091034953 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 14 06:15:18 localhost podman[330371]: 2025-10-14 10:15:18.985263109 +0000 UTC m=+0.127096352 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:15:18 localhost podman[330372]: 2025-10-14 10:15:18.996128209 +0000 UTC m=+0.134363236 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 06:15:18 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 06:15:19 localhost podman[330372]: 2025-10-14 10:15:19.038405553 +0000 UTC m=+0.176640630 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 14 06:15:19 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:15:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Oct 14 06:15:19 localhost systemd[1]: tmp-crun.eVvO2L.mount: Deactivated successfully.
Oct 14 06:15:19 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:19.602 2 INFO neutron.agent.securitygroups_rpc [None req-2e0a1545-8dce-4395-9eda-96b0915ada53 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']
Oct 14 06:15:19 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:19.672 270389 INFO neutron.agent.linux.ip_lib [None req-d949c622-993a-4220-87eb-4946b6d68697 - - - - - -] Device tap182390ef-8a cannot be used as it has no MAC address
Oct 14 06:15:19 localhost nova_compute[295778]: 2025-10-14 10:15:19.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:19 localhost kernel: device tap182390ef-8a entered promiscuous mode
Oct 14 06:15:19 localhost nova_compute[295778]: 2025-10-14 10:15:19.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:19 localhost NetworkManager[5972]: [1760436919.7043] manager: (tap182390ef-8a): new Generic device (/org/freedesktop/NetworkManager/Devices/41)
Oct 14 06:15:19 localhost ovn_controller[156286]: 2025-10-14T10:15:19Z|00211|binding|INFO|Claiming lport 182390ef-8add-4244-9350-e67a185ecec6 for this chassis.
Oct 14 06:15:19 localhost ovn_controller[156286]: 2025-10-14T10:15:19Z|00212|binding|INFO|182390ef-8add-4244-9350-e67a185ecec6: Claiming unknown
Oct 14 06:15:19 localhost systemd-udevd[330423]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:15:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:15:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 14 06:15:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:19.726 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=182390ef-8add-4244-9350-e67a185ecec6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:15:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:19.728 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 182390ef-8add-4244-9350-e67a185ecec6 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis
Oct 14 06:15:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:19.731 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 06:15:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 e133: 6 total, 6 up, 6 in
Oct 14 06:15:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:19.732 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[c36bb951-25f9-4030-adb8-4ab81ed8252f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:15:19 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e133: 6 total, 6 up, 6 in
Oct 14 06:15:19 localhost nova_compute[295778]: 2025-10-14 10:15:19.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:19 localhost ovn_controller[156286]: 2025-10-14T10:15:19Z|00213|binding|INFO|Setting lport 182390ef-8add-4244-9350-e67a185ecec6 ovn-installed in OVS
Oct 14 06:15:19 localhost ovn_controller[156286]: 2025-10-14T10:15:19Z|00214|binding|INFO|Setting lport 182390ef-8add-4244-9350-e67a185ecec6 up in Southbound
Oct 14 06:15:19 localhost nova_compute[295778]: 2025-10-14 10:15:19.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:19 localhost nova_compute[295778]: 2025-10-14 10:15:19.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:19 localhost nova_compute[295778]: 2025-10-14 10:15:19.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:20 localhost nova_compute[295778]: 2025-10-14 10:15:20.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:20 localhost podman[330478]:
Oct 14 06:15:20 localhost podman[330478]: 2025-10-14 10:15:20.626884453 +0000 UTC m=+0.084317194 container create 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 06:15:20 localhost systemd[1]: Started libpod-conmon-49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d.scope.
Oct 14 06:15:20 localhost systemd[1]: tmp-crun.8bzEhv.mount: Deactivated successfully.
Oct 14 06:15:20 localhost podman[330478]: 2025-10-14 10:15:20.587371592 +0000 UTC m=+0.044804383 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:15:20 localhost systemd[1]: Started libcrun container.
Oct 14 06:15:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31396392576c738567f222982419ed743e3f13fcd26bfed75be1f184e61f9c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:15:20 localhost podman[330478]: 2025-10-14 10:15:20.715056348 +0000 UTC m=+0.172489089 container init 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:15:20 localhost podman[330478]: 2025-10-14 10:15:20.723354469 +0000 UTC m=+0.180787210 container start 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 06:15:20 localhost dnsmasq[330497]: started, version 2.85 cachesize 150
Oct 14 06:15:20 localhost dnsmasq[330497]: DNS service limited to local subnets
Oct 14 06:15:20 localhost dnsmasq[330497]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:15:20 localhost dnsmasq[330497]: warning: no upstream servers configured
Oct 14 06:15:20 localhost dnsmasq-dhcp[330497]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 14 06:15:20 localhost dnsmasq[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:15:20 localhost dnsmasq-dhcp[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:15:20 localhost dnsmasq-dhcp[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:15:20 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:20.782 270389 INFO neutron.agent.dhcp.agent [None req-d949c622-993a-4220-87eb-4946b6d68697 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:19Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9a084941-85bf-4980-81f2-3f64564a0e8f, ip_allocation=immediate, mac_address=fa:16:3e:ca:d7:46, name=tempest-NetworksTestDHCPv6-495001824, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=14, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['081b6a3f-15c4-4329-ba27-38a06c2f8d29'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:18Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=1715, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:19Z on network 74049e43-4aa7-4318-9233-a58980c3495b
Oct 14 06:15:20 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:20.878 2 INFO neutron.agent.securitygroups_rpc [None req-36685efb-a3c2-46cc-8647-4994e9dc66ed 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']
Oct 14 06:15:20 localhost nova_compute[295778]: 2025-10-14 10:15:20.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:15:20 localhost nova_compute[295778]: 2025-10-14 10:15:20.925 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:15:20 localhost nova_compute[295778]: 2025-10-14 10:15:20.926 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:15:20 localhost nova_compute[295778]: 2025-10-14 10:15:20.927 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:15:20 localhost nova_compute[295778]: 2025-10-14 10:15:20.927 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 06:15:20 localhost nova_compute[295778]: 2025-10-14 10:15:20.927 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 06:15:20 localhost dnsmasq[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses
Oct 14 06:15:20 localhost dnsmasq-dhcp[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:15:20 localhost podman[330515]: 2025-10-14 10:15:20.953935363 +0000 UTC m=+0.053706450 container kill 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 06:15:20 localhost dnsmasq-dhcp[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:15:20 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:20.973 270389 INFO neutron.agent.dhcp.agent [None req-639b5800-1e2a-4d01-a49a-ac65b4d7a575 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed
Oct 14 06:15:21 localhost kernel: device tap182390ef-8a left promiscuous mode
Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:21 localhost ovn_controller[156286]: 2025-10-14T10:15:21Z|00215|binding|INFO|Releasing lport 182390ef-8add-4244-9350-e67a185ecec6 from this chassis (sb_readonly=0)
Oct 14 06:15:21 localhost ovn_controller[156286]: 2025-10-14T10:15:21Z|00216|binding|INFO|Setting lport 182390ef-8add-4244-9350-e67a185ecec6 down in Southbound
Oct 14 06:15:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:21.132 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=182390ef-8add-4244-9350-e67a185ecec6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:15:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:21.135 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 182390ef-8add-4244-9350-e67a185ecec6 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis
Oct 14 06:15:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:21.138 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 06:15:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:21.139 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[613c14ec-f56c-425a-8f27-4da4e8e38f87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.188 270389 INFO neutron.agent.dhcp.agent [None req-f8840188-bbc5-4228-b0e4-290aea93f9ed - - - - - -] DHCP configuration for ports {'9a084941-85bf-4980-81f2-3f64564a0e8f'} is completed
Oct 14 06:15:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Oct 14 06:15:21 localhost dnsmasq[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:15:21 localhost dnsmasq-dhcp[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:15:21 localhost dnsmasq-dhcp[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:15:21 localhost podman[330576]: 2025-10-14 10:15:21.306650017 +0000 UTC m=+0.057616634 container kill 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent [None req-d949c622-993a-4220-87eb-4946b6d68697 - - - - - -] Unable to reload_allocations dhcp for 74049e43-4aa7-4318-9233-a58980c3495b.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap182390ef-8a not found in namespace qdhcp-74049e43-4aa7-4318-9233-a58980c3495b.
Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Oct 14 06:15:21 
localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent return fut.result() Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent return self.__get_result() Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent raise self._exception Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 
ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap182390ef-8a not found in namespace qdhcp-74049e43-4aa7-4318-9233-a58980c3495b. Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.332 270389 ERROR neutron.agent.dhcp.agent #033[00m Oct 14 06:15:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:15:21 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2506028605' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.413 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:15:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:21.442 270389 INFO neutron.agent.linux.ip_lib [None req-fc20895b-0468-49da-84e9-672f4b144fb7 - - - - - -] Device tap603e7248-17 cannot be used as it has no MAC address#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:21 localhost kernel: device tap603e7248-17 entered promiscuous mode Oct 14 06:15:21 localhost NetworkManager[5972]: [1760436921.4788] manager: (tap603e7248-17): new Generic device (/org/freedesktop/NetworkManager/Devices/42) Oct 14 06:15:21 localhost ovn_controller[156286]: 2025-10-14T10:15:21Z|00217|binding|INFO|Claiming lport 603e7248-173f-4d7a-9c09-0d9bc9b4624e for this chassis. 
Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:21 localhost ovn_controller[156286]: 2025-10-14T10:15:21Z|00218|binding|INFO|603e7248-173f-4d7a-9c09-0d9bc9b4624e: Claiming unknown Oct 14 06:15:21 localhost systemd-udevd[330425]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:15:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:21.492 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-ed9fc40f-a480-44f3-8674-2504cda1a2ad', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed9fc40f-a480-44f3-8674-2504cda1a2ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e6e5d2b322d4a35bd40e5b22dbee82d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=708441c7-6de6-4a96-9516-bf9d4722d80d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=603e7248-173f-4d7a-9c09-0d9bc9b4624e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:21.495 161932 INFO 
neutron.agent.ovn.metadata.agent [-] Port 603e7248-173f-4d7a-9c09-0d9bc9b4624e in datapath ed9fc40f-a480-44f3-8674-2504cda1a2ad bound to our chassis#033[00m Oct 14 06:15:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:21.499 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ed9fc40f-a480-44f3-8674-2504cda1a2ad or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:21.500 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b78d749c-a423-464f-8bf1-a25da86a37a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:21 localhost ovn_controller[156286]: 2025-10-14T10:15:21Z|00219|binding|INFO|Setting lport 603e7248-173f-4d7a-9c09-0d9bc9b4624e ovn-installed in OVS Oct 14 06:15:21 localhost ovn_controller[156286]: 2025-10-14T10:15:21Z|00220|binding|INFO|Setting lport 603e7248-173f-4d7a-9c09-0d9bc9b4624e up in Southbound Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 
2025-10-14 10:15:21.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.659 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.661 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11465MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", 
"address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.661 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.661 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.732 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.733 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - 
- - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:15:21 localhost nova_compute[295778]: 2025-10-14 10:15:21.765 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:15:22 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:22.045 2 INFO neutron.agent.securitygroups_rpc [None req-c443f879-fd39-41c0-b2ca-b90cd4f8dede 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:15:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:15:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3993334970' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:15:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:15:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3993334970' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:15:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:15:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4182752298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:15:22 localhost nova_compute[295778]: 2025-10-14 10:15:22.244 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:15:22 localhost nova_compute[295778]: 2025-10-14 10:15:22.249 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:15:22 localhost nova_compute[295778]: 2025-10-14 10:15:22.266 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:15:22 localhost nova_compute[295778]: 2025-10-14 10:15:22.267 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:15:22 localhost nova_compute[295778]: 2025-10-14 10:15:22.268 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:15:22 localhost podman[330677]: Oct 14 06:15:22 localhost podman[330677]: 2025-10-14 10:15:22.417859609 +0000 UTC m=+0.089343828 container create 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:15:22 localhost systemd[1]: Started libpod-conmon-1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553.scope. Oct 14 06:15:22 localhost podman[330677]: 2025-10-14 10:15:22.373446188 +0000 UTC m=+0.044930447 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:22 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c55f420f1eb15759f0a725be88d7e1901d1a9d87995a525e4d7c0e197f60b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:22 localhost podman[330677]: 2025-10-14 10:15:22.491679143 +0000 UTC m=+0.163163332 container init 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:15:22 localhost podman[330677]: 2025-10-14 10:15:22.501113164 +0000 UTC m=+0.172597383 container start 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:15:22 localhost dnsmasq[330695]: started, version 2.85 cachesize 150 Oct 14 06:15:22 localhost dnsmasq[330695]: DNS service limited to local subnets Oct 14 06:15:22 localhost dnsmasq[330695]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:22 localhost dnsmasq[330695]: warning: no upstream servers 
configured Oct 14 06:15:22 localhost dnsmasq-dhcp[330695]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:15:22 localhost dnsmasq[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/addn_hosts - 0 addresses Oct 14 06:15:22 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/host Oct 14 06:15:22 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/opts Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.543 270389 INFO neutron.agent.dhcp.agent [None req-3bbc15e3-85ce-408d-8d05-6f357cd71677 - - - - - -] Synchronizing state#033[00m Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.658 270389 INFO neutron.agent.dhcp.agent [None req-a66ca587-93b0-4d45-994f-c2298d363e77 - - - - - -] DHCP configuration for ports {'0e76668d-f259-48ce-be33-2c7738fe2ce1'} is completed#033[00m Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.754 270389 INFO neutron.agent.dhcp.agent [None req-57577830-77f6-456f-a11e-fdb4def335ac - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.758 270389 INFO neutron.agent.dhcp.agent [-] Starting network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.758 270389 INFO neutron.agent.dhcp.agent [-] Finished network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.759 270389 INFO neutron.agent.dhcp.agent [-] Starting network 7c69ea3e-ed70-4a0e-a9f9-cd75740e37fa dhcp configuration#033[00m Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.759 270389 INFO neutron.agent.dhcp.agent [-] Finished network 7c69ea3e-ed70-4a0e-a9f9-cd75740e37fa dhcp configuration#033[00m Oct 14 06:15:22 
localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.759 270389 INFO neutron.agent.dhcp.agent [-] Starting network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration#033[00m Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.760 270389 INFO neutron.agent.dhcp.agent [-] Finished network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration#033[00m Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.762 270389 INFO neutron.agent.dhcp.agent [None req-57577830-77f6-456f-a11e-fdb4def335ac - - - - - -] Synchronizing state complete#033[00m Oct 14 06:15:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:22.903 270389 INFO neutron.agent.dhcp.agent [None req-ea82082c-b3c6-4c62-a930-a46c8b56c29a - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:15:22 localhost nova_compute[295778]: 2025-10-14 10:15:22.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:23 localhost dnsmasq[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:23 localhost dnsmasq-dhcp[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:23 localhost podman[330711]: 2025-10-14 10:15:23.018519599 +0000 UTC m=+0.064769884 container kill 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:23 
localhost dnsmasq-dhcp[330497]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent [None req-c80801c8-cd00-4475-9328-5b92929027f8 - - - - - -] Unable to reload_allocations dhcp for 74049e43-4aa7-4318-9233-a58980c3495b.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap182390ef-8a not found in namespace qdhcp-74049e43-4aa7-4318-9233-a58980c3495b. Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 
1610, in _set_default_route Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in 
wrapped_f Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent return fut.result() Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent return self.__get_result() Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent raise self._exception Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File 
"/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap182390ef-8a not found in namespace qdhcp-74049e43-4aa7-4318-9233-a58980c3495b. Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.044 270389 ERROR neutron.agent.dhcp.agent #033[00m Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.047 270389 INFO neutron.agent.dhcp.agent [None req-57577830-77f6-456f-a11e-fdb4def335ac - - - - - -] Synchronizing state#033[00m Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.180 270389 INFO neutron.agent.dhcp.agent [None req-e5636e31-65d1-4188-af2a-fa10f308b4b6 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '182390ef-8add-4244-9350-e67a185ecec6'} is completed#033[00m Oct 14 06:15:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 39 op/s Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.312 270389 INFO neutron.agent.dhcp.agent [None req-877b5710-4142-4bc2-a9f0-400b14e72efb - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 14 06:15:23 localhost 
neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.313 270389 INFO neutron.agent.dhcp.agent [-] Starting network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.316 270389 INFO neutron.agent.dhcp.agent [-] Starting network 7c69ea3e-ed70-4a0e-a9f9-cd75740e37fa dhcp configuration#033[00m Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.316 270389 INFO neutron.agent.dhcp.agent [-] Finished network 7c69ea3e-ed70-4a0e-a9f9-cd75740e37fa dhcp configuration#033[00m Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.317 270389 INFO neutron.agent.dhcp.agent [-] Starting network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration#033[00m Oct 14 06:15:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:23.317 270389 INFO neutron.agent.dhcp.agent [-] Finished network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration#033[00m Oct 14 06:15:23 localhost dnsmasq[330497]: exiting on receipt of SIGTERM Oct 14 06:15:23 localhost podman[330741]: 2025-10-14 10:15:23.475254569 +0000 UTC m=+0.055618689 container kill 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:23 localhost systemd[1]: tmp-crun.dLsDMw.mount: Deactivated successfully. Oct 14 06:15:23 localhost systemd[1]: libpod-49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d.scope: Deactivated successfully. 
Oct 14 06:15:23 localhost podman[330753]: 2025-10-14 10:15:23.541623165 +0000 UTC m=+0.055206050 container died 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:23 localhost podman[330753]: 2025-10-14 10:15:23.576609847 +0000 UTC m=+0.090192682 container cleanup 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:23 localhost systemd[1]: libpod-conmon-49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d.scope: Deactivated successfully. 
Oct 14 06:15:23 localhost podman[330755]: 2025-10-14 10:15:23.600853531 +0000 UTC m=+0.104038099 container remove 49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:15:23 localhost systemd[1]: var-lib-containers-storage-overlay-c31396392576c738567f222982419ed743e3f13fcd26bfed75be1f184e61f9c4-merged.mount: Deactivated successfully. Oct 14 06:15:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49bb37d35629b416bbe6e0e412f7c7a57809a2400cf8c134668aa088f316441d-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:15:23 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:23.805 2 INFO neutron.agent.securitygroups_rpc [None req-4115f1a8-5df7-4a09-a802-bec7f8a3f06c b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m Oct 14 06:15:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:24.284 270389 INFO neutron.agent.linux.ip_lib [None req-255de5fc-93ea-417c-94f1-6f54981039c7 - - - - - -] Device tap182390ef-8a cannot be used as it has no MAC address#033[00m Oct 14 06:15:24 localhost nova_compute[295778]: 2025-10-14 10:15:24.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:24 localhost kernel: device tap182390ef-8a entered promiscuous mode Oct 14 06:15:24 localhost NetworkManager[5972]: [1760436924.3508] manager: (tap182390ef-8a): new Generic device (/org/freedesktop/NetworkManager/Devices/43) Oct 14 06:15:24 localhost ovn_controller[156286]: 2025-10-14T10:15:24Z|00221|binding|INFO|Claiming lport 182390ef-8add-4244-9350-e67a185ecec6 for this chassis. 
Oct 14 06:15:24 localhost ovn_controller[156286]: 2025-10-14T10:15:24Z|00222|binding|INFO|182390ef-8add-4244-9350-e67a185ecec6: Claiming unknown Oct 14 06:15:24 localhost nova_compute[295778]: 2025-10-14 10:15:24.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:24 localhost ovn_controller[156286]: 2025-10-14T10:15:24Z|00223|binding|INFO|Setting lport 182390ef-8add-4244-9350-e67a185ecec6 ovn-installed in OVS Oct 14 06:15:24 localhost nova_compute[295778]: 2025-10-14 10:15:24.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:24 localhost nova_compute[295778]: 2025-10-14 10:15:24.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:24 localhost ovn_controller[156286]: 2025-10-14T10:15:24Z|00224|binding|INFO|Setting lport 182390ef-8add-4244-9350-e67a185ecec6 up in Southbound Oct 14 06:15:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:24.364 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 
'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=182390ef-8add-4244-9350-e67a185ecec6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:24.366 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 182390ef-8add-4244-9350-e67a185ecec6 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:15:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:24.368 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:24.369 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e8960424-65d8-4a00-87bd-28a97d22409c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:24 localhost nova_compute[295778]: 2025-10-14 10:15:24.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:24 localhost nova_compute[295778]: 2025-10-14 10:15:24.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:24 localhost nova_compute[295778]: 2025-10-14 10:15:24.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:15:24 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:15:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:15:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:15:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:15:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:15:24 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 23add917-1b1e-4d54-a3a5-12a883fb59c7 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:15:24 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 23add917-1b1e-4d54-a3a5-12a883fb59c7 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:15:24 localhost ceph-mgr[300442]: [progress INFO root] Completed event 23add917-1b1e-4d54-a3a5-12a883fb59c7 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:15:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:15:24 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], 
"format": "json"} : dispatch Oct 14 06:15:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:15:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:15:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:15:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:15:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:15:24 localhost podman[330895]: 2025-10-14 10:15:24.743546171 +0000 UTC m=+0.095750678 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, org.label-schema.schema-version=1.0) Oct 14 06:15:24 localhost podman[330895]: 2025-10-14 10:15:24.773922689 +0000 UTC m=+0.126127156 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, 
config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=iscsid) Oct 14 06:15:24 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:15:24 localhost podman[330896]: 2025-10-14 10:15:24.85136779 +0000 UTC m=+0.201345998 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 06:15:24 localhost podman[330896]: 2025-10-14 10:15:24.862068105 +0000 UTC m=+0.212046293 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Oct 14 06:15:24 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:15:25 localhost nova_compute[295778]: 2025-10-14 10:15:25.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:25 localhost podman[330963]: Oct 14 06:15:25 localhost podman[330963]: 2025-10-14 10:15:25.197054396 +0000 UTC m=+0.093955960 container create aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:15:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 67 op/s Oct 14 06:15:25 localhost systemd[1]: Started libpod-conmon-aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b.scope. Oct 14 06:15:25 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:25 localhost podman[330963]: 2025-10-14 10:15:25.15358927 +0000 UTC m=+0.050490864 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f849cae7f6dcb8ccf7a83808720033ed3c02cfa471f215d5d312fa7083b6bad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:25 localhost podman[330963]: 2025-10-14 10:15:25.26748402 +0000 UTC m=+0.164385564 container init aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:25 localhost podman[330963]: 2025-10-14 10:15:25.277082005 +0000 UTC m=+0.173983549 container start aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:15:25 localhost dnsmasq[330982]: started, version 2.85 cachesize 150 Oct 14 06:15:25 localhost dnsmasq[330982]: DNS service limited to local subnets Oct 14 06:15:25 localhost dnsmasq[330982]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:25 localhost dnsmasq[330982]: warning: no upstream servers configured Oct 14 06:15:25 localhost dnsmasq-dhcp[330982]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:15:25 localhost dnsmasq[330982]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:25 localhost dnsmasq-dhcp[330982]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:25 localhost dnsmasq-dhcp[330982]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:25 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:25.317 2 INFO neutron.agent.securitygroups_rpc [None req-0c22be32-228c-44ef-8a0f-96c9a8ac58ba b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m Oct 14 06:15:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:25.335 270389 INFO neutron.agent.dhcp.agent [None req-255de5fc-93ea-417c-94f1-6f54981039c7 - - - - - -] Finished network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:15:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:25.336 270389 INFO neutron.agent.dhcp.agent [None req-877b5710-4142-4bc2-a9f0-400b14e72efb - - - - - -] Synchronizing state complete#033[00m Oct 14 06:15:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:25.528 270389 INFO neutron.agent.dhcp.agent [None req-5d514a7f-875f-45e8-bdc2-1adfd1158451 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '182390ef-8add-4244-9350-e67a185ecec6'} is completed#033[00m Oct 14 06:15:25 localhost podman[330999]: 2025-10-14 10:15:25.59301986 +0000 UTC m=+0.050023231 container kill aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:15:25 localhost dnsmasq[330982]: exiting on receipt of SIGTERM Oct 14 06:15:25 localhost systemd[1]: libpod-aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b.scope: Deactivated successfully. Oct 14 06:15:25 localhost podman[331011]: 2025-10-14 10:15:25.660258439 +0000 UTC m=+0.055278931 container died aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:15:25 localhost podman[331011]: 2025-10-14 10:15:25.701840115 +0000 UTC m=+0.096860577 container cleanup aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base 
Image, tcib_managed=true) Oct 14 06:15:25 localhost systemd[1]: libpod-conmon-aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b.scope: Deactivated successfully. Oct 14 06:15:25 localhost systemd[1]: var-lib-containers-storage-overlay-7f849cae7f6dcb8ccf7a83808720033ed3c02cfa471f215d5d312fa7083b6bad-merged.mount: Deactivated successfully. Oct 14 06:15:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:25 localhost podman[331013]: 2025-10-14 10:15:25.746681068 +0000 UTC m=+0.138964577 container remove aabab9c6518b005fd5b9ac17c563c5543e53f42401525d630d8d9ff81d6fcb2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:15:25 localhost nova_compute[295778]: 2025-10-14 10:15:25.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:25 localhost ovn_controller[156286]: 2025-10-14T10:15:25Z|00225|binding|INFO|Releasing lport 182390ef-8add-4244-9350-e67a185ecec6 from this chassis (sb_readonly=0) Oct 14 06:15:25 localhost ovn_controller[156286]: 2025-10-14T10:15:25Z|00226|binding|INFO|Setting lport 182390ef-8add-4244-9350-e67a185ecec6 down in Southbound Oct 14 06:15:25 localhost kernel: device tap182390ef-8a left promiscuous mode Oct 14 06:15:25 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:25.774 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=182390ef-8add-4244-9350-e67a185ecec6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:25 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:25.776 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 182390ef-8add-4244-9350-e67a185ecec6 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:15:25 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:25.778 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:25 localhost 
ovn_metadata_agent[161927]: 2025-10-14 10:15:25.779 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6eac41-8e85-49a8-b976-620ffcb8bbfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:25 localhost nova_compute[295778]: 2025-10-14 10:15:25.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:26 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. Oct 14 06:15:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:26.742 270389 INFO neutron.agent.linux.ip_lib [None req-c7da93fb-9caa-4469-9ee8-3f24b6c781db - - - - - -] Device tap397d7ff0-06 cannot be used as it has no MAC address#033[00m Oct 14 06:15:26 localhost nova_compute[295778]: 2025-10-14 10:15:26.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:26 localhost kernel: device tap397d7ff0-06 entered promiscuous mode Oct 14 06:15:26 localhost NetworkManager[5972]: [1760436926.7953] manager: (tap397d7ff0-06): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Oct 14 06:15:26 localhost ovn_controller[156286]: 2025-10-14T10:15:26Z|00227|binding|INFO|Claiming lport 397d7ff0-06f9-4819-8263-d27501006f0b for this chassis. 
Oct 14 06:15:26 localhost ovn_controller[156286]: 2025-10-14T10:15:26Z|00228|binding|INFO|397d7ff0-06f9-4819-8263-d27501006f0b: Claiming unknown Oct 14 06:15:26 localhost nova_compute[295778]: 2025-10-14 10:15:26.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:26 localhost ovn_controller[156286]: 2025-10-14T10:15:26Z|00229|binding|INFO|Setting lport 397d7ff0-06f9-4819-8263-d27501006f0b up in Southbound Oct 14 06:15:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:26.806 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=397d7ff0-06f9-4819-8263-d27501006f0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:26 localhost ovn_metadata_agent[161927]: 
2025-10-14 10:15:26.809 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 397d7ff0-06f9-4819-8263-d27501006f0b in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:15:26 localhost ovn_controller[156286]: 2025-10-14T10:15:26Z|00230|binding|INFO|Setting lport 397d7ff0-06f9-4819-8263-d27501006f0b ovn-installed in OVS Oct 14 06:15:26 localhost nova_compute[295778]: 2025-10-14 10:15:26.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:26.813 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:26 localhost nova_compute[295778]: 2025-10-14 10:15:26.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:26.815 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[07a34e58-b041-416c-8411-d197931ad5dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:26 localhost nova_compute[295778]: 2025-10-14 10:15:26.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:26 localhost nova_compute[295778]: 2025-10-14 10:15:26.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:26 localhost nova_compute[295778]: 2025-10-14 10:15:26.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 67 op/s Oct 14 06:15:27 localhost podman[331106]: Oct 14 06:15:27 localhost podman[331106]: 2025-10-14 10:15:27.691804036 +0000 UTC m=+0.090929130 container create b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:27 localhost systemd[1]: Started libpod-conmon-b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a.scope. Oct 14 06:15:27 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:27 localhost podman[331106]: 2025-10-14 10:15:27.647751434 +0000 UTC m=+0.046876558 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da488ab77a1ac6d0b11a91216b7451bf09b09ce2572518872db77c12e1292b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:27 localhost podman[331106]: 2025-10-14 10:15:27.757197226 +0000 UTC m=+0.156322370 container init b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:15:27 localhost podman[331106]: 2025-10-14 10:15:27.766537604 +0000 UTC m=+0.165662708 container start b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 14 06:15:27 localhost dnsmasq[331124]: started, version 2.85 cachesize 150 Oct 14 06:15:27 localhost dnsmasq[331124]: DNS service limited to local subnets Oct 14 06:15:27 localhost dnsmasq[331124]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:27 localhost dnsmasq[331124]: warning: no upstream servers configured Oct 14 06:15:27 localhost dnsmasq-dhcp[331124]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:15:27 localhost dnsmasq[331124]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:27 localhost dnsmasq-dhcp[331124]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:27 localhost dnsmasq-dhcp[331124]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:27 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:27.824 270389 INFO neutron.agent.dhcp.agent [None req-c7da93fb-9caa-4469-9ee8-3f24b6c781db - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:26Z, description=, device_id=62ed2e08-845e-4aec-8b6b-ea88be396032, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e8fadbb1-7fc4-456a-a1ed-199804acaffb, ip_allocation=immediate, mac_address=fa:16:3e:67:3b:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['07ad4b4e-660d-46ee-9a2d-68e28c5a1b53'], 
tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:25Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=False, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1759, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:27Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:15:27 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:27.889 270389 INFO neutron.agent.dhcp.agent [None req-7e832032-d994-4e41-bc44-ff06bd65d95c - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:15:27 localhost nova_compute[295778]: 2025-10-14 10:15:27.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:28 localhost dnsmasq[331124]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:15:28 localhost dnsmasq-dhcp[331124]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:28 localhost podman[331142]: 2025-10-14 10:15:28.031581395 +0000 UTC m=+0.060534852 container kill b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:15:28 localhost dnsmasq-dhcp[331124]: read 
/var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:28 localhost nova_compute[295778]: 2025-10-14 10:15:28.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:28 localhost kernel: device tap397d7ff0-06 left promiscuous mode Oct 14 06:15:28 localhost ovn_controller[156286]: 2025-10-14T10:15:28Z|00231|binding|INFO|Releasing lport 397d7ff0-06f9-4819-8263-d27501006f0b from this chassis (sb_readonly=0) Oct 14 06:15:28 localhost ovn_controller[156286]: 2025-10-14T10:15:28Z|00232|binding|INFO|Setting lport 397d7ff0-06f9-4819-8263-d27501006f0b down in Southbound Oct 14 06:15:28 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:28.214 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=397d7ff0-06f9-4819-8263-d27501006f0b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:28 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:28.216 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 397d7ff0-06f9-4819-8263-d27501006f0b in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:15:28 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:28.218 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:28 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:28.224 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[14e2b366-14cf-48e4-96c3-ac08866759ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:28 localhost nova_compute[295778]: 2025-10-14 10:15:28.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:28 localhost nova_compute[295778]: 2025-10-14 10:15:28.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:28 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:28.248 270389 INFO neutron.agent.dhcp.agent [None req-cdecddf1-7bd1-498a-a100-c4ae7719ecb7 - - - - - -] DHCP configuration for ports {'e8fadbb1-7fc4-456a-a1ed-199804acaffb'} is completed#033[00m Oct 14 06:15:28 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:28.323 2 INFO neutron.agent.securitygroups_rpc [None req-faa6c76b-98b5-4cd2-b87b-72ca8f02394e 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default 
default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:15:28 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:28.557 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:28Z, description=, device_id=e4087971-46b9-47a9-bed6-ec82f44e073a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=42e5317d-bd24-4fab-bc11-fa03e3cda433, ip_allocation=immediate, mac_address=fa:16:3e:eb:ec:5e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:15:19Z, description=, dns_domain=, id=ed9fc40f-a480-44f3-8674-2504cda1a2ad, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-105903502, port_security_enabled=True, project_id=8e6e5d2b322d4a35bd40e5b22dbee82d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59638, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1718, status=ACTIVE, subnets=['578c7a66-ad97-4e43-9222-8bdb3cb55dcf'], tags=[], tenant_id=8e6e5d2b322d4a35bd40e5b22dbee82d, updated_at=2025-10-14T10:15:20Z, vlan_transparent=None, network_id=ed9fc40f-a480-44f3-8674-2504cda1a2ad, port_security_enabled=False, project_id=8e6e5d2b322d4a35bd40e5b22dbee82d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1766, status=DOWN, tags=[], tenant_id=8e6e5d2b322d4a35bd40e5b22dbee82d, updated_at=2025-10-14T10:15:28Z on network ed9fc40f-a480-44f3-8674-2504cda1a2ad#033[00m Oct 14 06:15:28 localhost dnsmasq[330695]: read 
/var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/addn_hosts - 1 addresses Oct 14 06:15:28 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/host Oct 14 06:15:28 localhost podman[331181]: 2025-10-14 10:15:28.757881607 +0000 UTC m=+0.058805295 container kill 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 14 06:15:28 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/opts Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.022 270389 INFO neutron.agent.dhcp.agent [None req-f2817d66-e8cf-48e6-bf56-a492648eefc2 - - - - - -] DHCP configuration for ports {'42e5317d-bd24-4fab-bc11-fa03e3cda433'} is completed#033[00m Oct 14 06:15:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 67 op/s Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.269 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.269 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task 
ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.270 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.270 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.271 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:15:29 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:15:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:15:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.706 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:26Z, description=, device_id=62ed2e08-845e-4aec-8b6b-ea88be396032, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, 
extra_dhcp_opts=[], fixed_ips=[], id=e8fadbb1-7fc4-456a-a1ed-199804acaffb, ip_allocation=immediate, mac_address=fa:16:3e:67:3b:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['07ad4b4e-660d-46ee-9a2d-68e28c5a1b53'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:25Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=False, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1759, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:27Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:15:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:15:29 localhost podman[331219]: 2025-10-14 10:15:29.89879495 +0000 UTC m=+0.063308985 container kill b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:29 localhost dnsmasq[331124]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:15:29 localhost dnsmasq-dhcp[331124]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:29 localhost dnsmasq-dhcp[331124]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 74049e43-4aa7-4318-9233-a58980c3495b.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap397d7ff0-06 not found in namespace qdhcp-74049e43-4aa7-4318-9233-a58980c3495b. 
Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Oct 14 06:15:29 
localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent return fut.result() Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent return self.__get_result() Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent raise self._exception Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 
ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap397d7ff0-06 not found in namespace qdhcp-74049e43-4aa7-4318-9233-a58980c3495b. Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.922 270389 ERROR neutron.agent.dhcp.agent #033[00m Oct 14 06:15:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:29.926 270389 INFO neutron.agent.dhcp.agent [None req-877b5710-4142-4bc2-a9f0-400b14e72efb - - - - - -] Synchronizing state#033[00m Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.943 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:15:29 localhost nova_compute[295778]: 2025-10-14 10:15:29.944 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:15:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:30.136 270389 INFO neutron.agent.dhcp.agent [None req-db677039-12b7-445c-aacc-fa0a075330dd - - - - - -] DHCP configuration for ports {'e8fadbb1-7fc4-456a-a1ed-199804acaffb'} is completed#033[00m Oct 14 06:15:30 localhost nova_compute[295778]: 2025-10-14 10:15:30.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:30.303 270389 INFO neutron.agent.dhcp.agent [None req-e304a4ba-3844-4959-8265-3c8915eb9b3a - - - - - -] All active networks have been fetched through 
RPC.#033[00m Oct 14 06:15:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:30.304 270389 INFO neutron.agent.dhcp.agent [-] Starting network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:15:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:30.307 270389 INFO neutron.agent.dhcp.agent [-] Starting network 7c69ea3e-ed70-4a0e-a9f9-cd75740e37fa dhcp configuration#033[00m Oct 14 06:15:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:30.308 270389 INFO neutron.agent.dhcp.agent [-] Finished network 7c69ea3e-ed70-4a0e-a9f9-cd75740e37fa dhcp configuration#033[00m Oct 14 06:15:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:30.309 270389 INFO neutron.agent.dhcp.agent [-] Starting network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration#033[00m Oct 14 06:15:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:30.309 270389 INFO neutron.agent.dhcp.agent [-] Finished network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration#033[00m Oct 14 06:15:30 localhost nova_compute[295778]: 2025-10-14 10:15:30.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:30 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:15:30 localhost dnsmasq[331124]: exiting on receipt of SIGTERM Oct 14 06:15:30 localhost podman[331249]: 2025-10-14 10:15:30.493665636 +0000 UTC m=+0.074069932 container kill b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 06:15:30 localhost systemd[1]: libpod-b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a.scope: Deactivated successfully. Oct 14 06:15:30 localhost podman[331261]: 2025-10-14 10:15:30.574969129 +0000 UTC m=+0.065965886 container died b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:15:30 localhost podman[246584]: time="2025-10-14T10:15:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:15:30 localhost podman[331261]: 2025-10-14 10:15:30.660244487 +0000 UTC m=+0.151241214 container cleanup b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:30 localhost podman[246584]: @ - - [14/Oct/2025:10:15:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149945 "" "Go-http-client/1.1" Oct 14 06:15:30 localhost systemd[1]: 
libpod-conmon-b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a.scope: Deactivated successfully. Oct 14 06:15:30 localhost podman[331263]: 2025-10-14 10:15:30.732953632 +0000 UTC m=+0.215403492 container remove b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:15:30 localhost podman[246584]: @ - - [14/Oct/2025:10:15:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19827 "" "Go-http-client/1.1" Oct 14 06:15:30 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:30.789 270389 INFO neutron.agent.linux.ip_lib [-] Device tap397d7ff0-06 cannot be used as it has no MAC address#033[00m Oct 14 06:15:30 localhost nova_compute[295778]: 2025-10-14 10:15:30.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:30 localhost kernel: device tap397d7ff0-06 entered promiscuous mode Oct 14 06:15:30 localhost NetworkManager[5972]: [1760436930.8172] manager: (tap397d7ff0-06): new Generic device (/org/freedesktop/NetworkManager/Devices/45) Oct 14 06:15:30 localhost nova_compute[295778]: 2025-10-14 10:15:30.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:30 localhost ovn_controller[156286]: 2025-10-14T10:15:30Z|00233|binding|INFO|Claiming lport 397d7ff0-06f9-4819-8263-d27501006f0b for this chassis. 
Oct 14 06:15:30 localhost ovn_controller[156286]: 2025-10-14T10:15:30Z|00234|binding|INFO|397d7ff0-06f9-4819-8263-d27501006f0b: Claiming unknown Oct 14 06:15:30 localhost systemd-udevd[331294]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:15:30 localhost nova_compute[295778]: 2025-10-14 10:15:30.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:30 localhost ovn_controller[156286]: 2025-10-14T10:15:30Z|00235|binding|INFO|Setting lport 397d7ff0-06f9-4819-8263-d27501006f0b ovn-installed in OVS Oct 14 06:15:30 localhost ovn_controller[156286]: 2025-10-14T10:15:30Z|00236|binding|INFO|Setting lport 397d7ff0-06f9-4819-8263-d27501006f0b up in Southbound Oct 14 06:15:30 localhost nova_compute[295778]: 2025-10-14 10:15:30.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:30 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:30.828 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=397d7ff0-06f9-4819-8263-d27501006f0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:30 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:30.831 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 397d7ff0-06f9-4819-8263-d27501006f0b in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:15:30 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:30.835 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:30 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:30.837 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[ceed8c86-758b-44dc-9ca9-83cb5ea02d2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:30 localhost journal[236030]: ethtool ioctl error on tap397d7ff0-06: No such device Oct 14 06:15:30 localhost nova_compute[295778]: 2025-10-14 10:15:30.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:30 localhost journal[236030]: ethtool ioctl error on tap397d7ff0-06: No such device Oct 14 06:15:30 localhost journal[236030]: ethtool ioctl error on tap397d7ff0-06: No such device Oct 14 06:15:30 localhost journal[236030]: ethtool ioctl error on tap397d7ff0-06: No such device Oct 14 06:15:30 localhost journal[236030]: ethtool ioctl error on 
tap397d7ff0-06: No such device Oct 14 06:15:30 localhost journal[236030]: ethtool ioctl error on tap397d7ff0-06: No such device Oct 14 06:15:30 localhost journal[236030]: ethtool ioctl error on tap397d7ff0-06: No such device Oct 14 06:15:30 localhost journal[236030]: ethtool ioctl error on tap397d7ff0-06: No such device Oct 14 06:15:30 localhost nova_compute[295778]: 2025-10-14 10:15:30.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:30 localhost systemd[1]: var-lib-containers-storage-overlay-1da488ab77a1ac6d0b11a91216b7451bf09b09ce2572518872db77c12e1292b2-merged.mount: Deactivated successfully. Oct 14 06:15:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b610b99abbf43694b1cf75b5bf141864a7cca4a9a0f43028c1ecad071c908e6a-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:30 localhost nova_compute[295778]: 2025-10-14 10:15:30.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.7 KiB/s wr, 74 op/s Oct 14 06:15:31 localhost podman[331364]: Oct 14 06:15:31 localhost podman[331364]: 2025-10-14 10:15:31.663853248 +0000 UTC m=+0.084835459 container create 5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_managed=true) Oct 14 06:15:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:15:31 localhost systemd[1]: Started libpod-conmon-5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347.scope. Oct 14 06:15:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:15:31 localhost podman[331364]: 2025-10-14 10:15:31.622529768 +0000 UTC m=+0.043511999 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:31 localhost systemd[1]: Started libcrun container. Oct 14 06:15:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a26cd0173b59e9e483e6a5203ebab75b77383650dda9dccc3fcfe73dc4505a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:31 localhost podman[331364]: 2025-10-14 10:15:31.766747824 +0000 UTC m=+0.187730035 container init 5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:31 localhost podman[331364]: 2025-10-14 10:15:31.777950042 +0000 UTC m=+0.198932253 container start 5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_managed=true, 
org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:31 localhost dnsmasq[331405]: started, version 2.85 cachesize 150 Oct 14 06:15:31 localhost dnsmasq[331405]: DNS service limited to local subnets Oct 14 06:15:31 localhost dnsmasq[331405]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:31 localhost dnsmasq[331405]: warning: no upstream servers configured Oct 14 06:15:31 localhost dnsmasq-dhcp[331405]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:15:31 localhost dnsmasq[331405]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:15:31 localhost dnsmasq-dhcp[331405]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:31 localhost dnsmasq-dhcp[331405]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:31 localhost podman[331381]: 2025-10-14 10:15:31.818950104 +0000 UTC m=+0.099254892 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container) Oct 14 06:15:31 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:31.841 270389 INFO neutron.agent.dhcp.agent [-] Finished network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:15:31 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:31.842 270389 INFO neutron.agent.dhcp.agent [None req-e304a4ba-3844-4959-8265-3c8915eb9b3a - - - - - -] Synchronizing state complete#033[00m Oct 14 06:15:31 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:31.843 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:28Z, description=, device_id=e4087971-46b9-47a9-bed6-ec82f44e073a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=42e5317d-bd24-4fab-bc11-fa03e3cda433, ip_allocation=immediate, mac_address=fa:16:3e:eb:ec:5e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:15:19Z, description=, dns_domain=, id=ed9fc40f-a480-44f3-8674-2504cda1a2ad, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-105903502, port_security_enabled=True, project_id=8e6e5d2b322d4a35bd40e5b22dbee82d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59638, qos_policy_id=None, revision_number=2, 
router:external=False, shared=False, standard_attr_id=1718, status=ACTIVE, subnets=['578c7a66-ad97-4e43-9222-8bdb3cb55dcf'], tags=[], tenant_id=8e6e5d2b322d4a35bd40e5b22dbee82d, updated_at=2025-10-14T10:15:20Z, vlan_transparent=None, network_id=ed9fc40f-a480-44f3-8674-2504cda1a2ad, port_security_enabled=False, project_id=8e6e5d2b322d4a35bd40e5b22dbee82d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1766, status=DOWN, tags=[], tenant_id=8e6e5d2b322d4a35bd40e5b22dbee82d, updated_at=2025-10-14T10:15:28Z on network ed9fc40f-a480-44f3-8674-2504cda1a2ad#033[00m Oct 14 06:15:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:15:31 localhost podman[331378]: 2025-10-14 10:15:31.861840864 +0000 UTC m=+0.153994137 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:15:31 localhost podman[331378]: 2025-10-14 10:15:31.872167809 +0000 UTC m=+0.164321052 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:15:31 localhost podman[331381]: 2025-10-14 10:15:31.888568216 +0000 UTC m=+0.168873014 container exec_died 
306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.buildah.version=1.33.7, version=9.6) Oct 14 06:15:31 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:15:31 localhost systemd[1]: tmp-crun.DCmwCL.mount: Deactivated successfully. Oct 14 06:15:31 localhost nova_compute[295778]: 2025-10-14 10:15:31.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:15:31 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:15:31 localhost podman[331416]: 2025-10-14 10:15:31.940130887 +0000 UTC m=+0.076437434 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 06:15:31 localhost podman[331416]: 2025-10-14 10:15:31.96429078 +0000 UTC m=+0.100597327 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:31 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:15:32 localhost dnsmasq[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/addn_hosts - 1 addresses Oct 14 06:15:32 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/host Oct 14 06:15:32 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/opts Oct 14 06:15:32 localhost podman[331465]: 2025-10-14 10:15:32.123707551 +0000 UTC m=+0.061062845 container kill 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:15:32 localhost systemd[1]: tmp-crun.WGauxZ.mount: Deactivated successfully. 
Oct 14 06:15:32 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:32.165 270389 INFO neutron.agent.dhcp.agent [None req-d2197446-eada-428f-9830-bfc65d32d876 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '397d7ff0-06f9-4819-8263-d27501006f0b', 'e8fadbb1-7fc4-456a-a1ed-199804acaffb'} is completed#033[00m Oct 14 06:15:32 localhost dnsmasq[331405]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:32 localhost dnsmasq-dhcp[331405]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:15:32 localhost dnsmasq-dhcp[331405]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:15:32 localhost podman[331504]: 2025-10-14 10:15:32.412392912 +0000 UTC m=+0.059018892 container kill 5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:32 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:32.452 270389 INFO neutron.agent.dhcp.agent [None req-4ec61584-8d2a-4980-867f-951bf8405aea - - - - - -] DHCP configuration for ports {'42e5317d-bd24-4fab-bc11-fa03e3cda433'} is completed#033[00m Oct 14 06:15:32 localhost nova_compute[295778]: 2025-10-14 10:15:32.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 
42 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 43 op/s Oct 14 06:15:33 localhost openstack_network_exporter[248748]: ERROR 10:15:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:15:33 localhost openstack_network_exporter[248748]: ERROR 10:15:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:15:33 localhost openstack_network_exporter[248748]: ERROR 10:15:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:15:33 localhost openstack_network_exporter[248748]: ERROR 10:15:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:15:33 localhost openstack_network_exporter[248748]: Oct 14 06:15:33 localhost openstack_network_exporter[248748]: ERROR 10:15:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:15:33 localhost openstack_network_exporter[248748]: Oct 14 06:15:34 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:34.283 2 INFO neutron.agent.securitygroups_rpc [None req-64f861fe-a14d-4cf4-a2d9-d8ab32f40bf6 daa37e9562ff4164ba297586fd32a970 8e6e5d2b322d4a35bd40e5b22dbee82d - - default default] Security group member updated ['5738ce03-d625-43e9-892b-9c4d671a952f']#033[00m Oct 14 06:15:34 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:34.298 2 INFO neutron.agent.securitygroups_rpc [None req-325610bf-d019-45ec-92d9-9f436fb13e27 b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m Oct 14 06:15:34 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:34.342 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:34 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:34.382 270389 INFO neutron.agent.dhcp.agent [-] Trigger 
reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:33Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ca66e00f-30e9-4afc-92e0-317722f1fdac, ip_allocation=immediate, mac_address=fa:16:3e:9d:42:2e, name=tempest-FloatingIPAdminTestJSON-1727334512, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:15:19Z, description=, dns_domain=, id=ed9fc40f-a480-44f3-8674-2504cda1a2ad, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-105903502, port_security_enabled=True, project_id=8e6e5d2b322d4a35bd40e5b22dbee82d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59638, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1718, status=ACTIVE, subnets=['578c7a66-ad97-4e43-9222-8bdb3cb55dcf'], tags=[], tenant_id=8e6e5d2b322d4a35bd40e5b22dbee82d, updated_at=2025-10-14T10:15:20Z, vlan_transparent=None, network_id=ed9fc40f-a480-44f3-8674-2504cda1a2ad, port_security_enabled=True, project_id=8e6e5d2b322d4a35bd40e5b22dbee82d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['5738ce03-d625-43e9-892b-9c4d671a952f'], standard_attr_id=1778, status=DOWN, tags=[], tenant_id=8e6e5d2b322d4a35bd40e5b22dbee82d, updated_at=2025-10-14T10:15:33Z on network ed9fc40f-a480-44f3-8674-2504cda1a2ad#033[00m Oct 14 06:15:34 localhost dnsmasq[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/addn_hosts - 2 addresses Oct 14 06:15:34 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/host Oct 14 06:15:34 localhost podman[331542]: 2025-10-14 
10:15:34.601710895 +0000 UTC m=+0.056161865 container kill 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:15:34 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/opts Oct 14 06:15:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:15:34 localhost nova_compute[295778]: 2025-10-14 10:15:34.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:15:35 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:35.059 270389 INFO neutron.agent.dhcp.agent [None req-294072fe-4993-4394-b1f8-89234dd0aedd - - - - - -] DHCP configuration for ports {'ca66e00f-30e9-4afc-92e0-317722f1fdac'} is completed#033[00m Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 43 op/s Oct 14 06:15:35 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:35.307 270389 INFO 
neutron.agent.linux.ip_lib [None req-4053bc64-50a6-44c5-a9f9-9b4a787151b2 - - - - - -] Device tap4e355359-04 cannot be used as it has no MAC address#033[00m Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost kernel: device tap4e355359-04 entered promiscuous mode Oct 14 06:15:35 localhost ovn_controller[156286]: 2025-10-14T10:15:35Z|00237|binding|INFO|Claiming lport 4e355359-04e1-474e-9fca-5892b54dbee2 for this chassis. Oct 14 06:15:35 localhost ovn_controller[156286]: 2025-10-14T10:15:35Z|00238|binding|INFO|4e355359-04e1-474e-9fca-5892b54dbee2: Claiming unknown Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost NetworkManager[5972]: [1760436935.3355] manager: (tap4e355359-04): new Generic device (/org/freedesktop/NetworkManager/Devices/46) Oct 14 06:15:35 localhost systemd-udevd[331586]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:15:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:35.353 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-ad377052-7a70-4723-8afc-3b9c2f0a726f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad377052-7a70-4723-8afc-3b9c2f0a726f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a840994a70374548889747682f4c0fa3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb73290e-12c9-47a8-9645-19f3cd18f1a6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=4e355359-04e1-474e-9fca-5892b54dbee2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:35.356 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 4e355359-04e1-474e-9fca-5892b54dbee2 in datapath ad377052-7a70-4723-8afc-3b9c2f0a726f bound to our chassis#033[00m Oct 14 06:15:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:35.357 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ad377052-7a70-4723-8afc-3b9c2f0a726f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:35.360 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[18424283-6f4d-4fd9-a43a-87e25c3d908d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:35 localhost journal[236030]: ethtool ioctl error on tap4e355359-04: No such device Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost journal[236030]: ethtool ioctl error on tap4e355359-04: No such device Oct 14 06:15:35 localhost ovn_controller[156286]: 2025-10-14T10:15:35Z|00239|binding|INFO|Setting lport 4e355359-04e1-474e-9fca-5892b54dbee2 ovn-installed in OVS Oct 14 06:15:35 localhost ovn_controller[156286]: 2025-10-14T10:15:35Z|00240|binding|INFO|Setting lport 4e355359-04e1-474e-9fca-5892b54dbee2 up in Southbound Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost journal[236030]: ethtool ioctl error on tap4e355359-04: No such device Oct 14 06:15:35 localhost journal[236030]: ethtool ioctl error on tap4e355359-04: No such device Oct 14 06:15:35 localhost journal[236030]: ethtool ioctl error on tap4e355359-04: No such device Oct 14 06:15:35 localhost journal[236030]: ethtool ioctl error on tap4e355359-04: No such device Oct 14 06:15:35 localhost journal[236030]: ethtool ioctl error on tap4e355359-04: No such device Oct 14 06:15:35 localhost journal[236030]: ethtool ioctl error on tap4e355359-04: No such device Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost dnsmasq[331405]: exiting on receipt of SIGTERM Oct 14 06:15:35 localhost podman[331602]: 2025-10-14 10:15:35.456608158 +0000 UTC m=+0.061830226 container kill 5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009) Oct 14 06:15:35 localhost systemd[1]: libpod-5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347.scope: Deactivated successfully. Oct 14 06:15:35 localhost podman[331631]: 2025-10-14 10:15:35.516636085 +0000 UTC m=+0.040742725 container died 5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:35 localhost systemd[1]: var-lib-containers-storage-overlay-01a26cd0173b59e9e483e6a5203ebab75b77383650dda9dccc3fcfe73dc4505a-merged.mount: Deactivated successfully. 
Oct 14 06:15:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:35 localhost podman[331631]: 2025-10-14 10:15:35.614120529 +0000 UTC m=+0.138227119 container remove 5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 14 06:15:35 localhost systemd[1]: libpod-conmon-5ee65c1d971e4d069d3f9fa7eeb663ffb686f1669f1fc5014473c73300ba6347.scope: Deactivated successfully. 
Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost ovn_controller[156286]: 2025-10-14T10:15:35Z|00241|binding|INFO|Releasing lport 397d7ff0-06f9-4819-8263-d27501006f0b from this chassis (sb_readonly=0) Oct 14 06:15:35 localhost ovn_controller[156286]: 2025-10-14T10:15:35Z|00242|binding|INFO|Setting lport 397d7ff0-06f9-4819-8263-d27501006f0b down in Southbound Oct 14 06:15:35 localhost kernel: device tap397d7ff0-06 left promiscuous mode Oct 14 06:15:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:35.637 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=397d7ff0-06f9-4819-8263-d27501006f0b) old=Port_Binding(up=[True], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:35.639 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 397d7ff0-06f9-4819-8263-d27501006f0b in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:15:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:35.641 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:35.642 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[119189a4-75b5-48d9-991e-dec04f46fe5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:35.647 2 INFO neutron.agent.securitygroups_rpc [None req-a8fd097b-722c-431b-846d-9b7a91a5b6ed b11f5b75a52243ed86cd4fe28898caef eff4d352999d485c9bd9a3b3cbf0c569 - - default default] Security group member updated ['25c1f9f0-ea5d-4940-9d8c-34da45a09b5d']#033[00m Oct 14 06:15:35 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:35.685 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:35 localhost nova_compute[295778]: 2025-10-14 10:15:35.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:35 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:35.975 270389 INFO 
neutron.agent.dhcp.agent [None req-5a0211e3-10ce-4966-bfd9-aff8a8c32684 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:35 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. Oct 14 06:15:36 localhost podman[331698]: Oct 14 06:15:36 localhost podman[331698]: 2025-10-14 10:15:36.220969813 +0000 UTC m=+0.086187803 container create 8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:15:36 localhost systemd[1]: Started libpod-conmon-8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16.scope. Oct 14 06:15:36 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:36 localhost podman[331698]: 2025-10-14 10:15:36.179195971 +0000 UTC m=+0.044413981 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60dcb1b65b71ff85f0fd5cc9c657b62085e32bdea70b8ca1327425382355a90e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:36 localhost podman[331698]: 2025-10-14 10:15:36.290596605 +0000 UTC m=+0.155814595 container init 8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009) Oct 14 06:15:36 localhost podman[331698]: 2025-10-14 10:15:36.299530813 +0000 UTC m=+0.164748803 container start 8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:36 localhost dnsmasq[331716]: started, version 2.85 cachesize 150 Oct 14 06:15:36 localhost dnsmasq[331716]: DNS service limited to local subnets Oct 14 06:15:36 localhost dnsmasq[331716]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:36 localhost dnsmasq[331716]: warning: no upstream servers configured Oct 14 06:15:36 localhost dnsmasq-dhcp[331716]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:15:36 localhost dnsmasq[331716]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 0 addresses Oct 14 06:15:36 localhost dnsmasq-dhcp[331716]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host Oct 14 06:15:36 localhost dnsmasq-dhcp[331716]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.469277) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436936469348, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1486, "num_deletes": 251, "total_data_size": 1285911, "memory_usage": 1315520, "flush_reason": "Manual Compaction"} Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436936482113, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 1244054, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27617, "largest_seqno": 
29102, "table_properties": {"data_size": 1238021, "index_size": 3247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13884, "raw_average_key_size": 20, "raw_value_size": 1225419, "raw_average_value_size": 1815, "num_data_blocks": 144, "num_entries": 675, "num_filter_entries": 675, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436820, "oldest_key_time": 1760436820, "file_creation_time": 1760436936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 12875 microseconds, and 4994 cpu microseconds. Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.482162) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 1244054 bytes OK Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.482184) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.483694) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.483713) EVENT_LOG_v1 {"time_micros": 1760436936483707, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.483768) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1279434, prev total WAL file size 1279434, number of live WAL files 2. Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.484413) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. 
'7061786F73003132353531' seq:0, type:0; will stop at (end) Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(1214KB)], [51(15MB)] Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436936484460, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 17044211, "oldest_snapshot_seqno": -1} Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 12555 keys, 15106360 bytes, temperature: kUnknown Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436936564030, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 15106360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15037828, "index_size": 36047, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31429, "raw_key_size": 339900, "raw_average_key_size": 27, "raw_value_size": 14826747, "raw_average_value_size": 1180, "num_data_blocks": 1336, "num_entries": 12555, "num_filter_entries": 12555, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760436936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.564425) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 15106360 bytes Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.566304) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.0 rd, 189.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 15.1 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(25.8) write-amplify(12.1) OK, records in: 13078, records dropped: 523 output_compression: NoCompression Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.566334) EVENT_LOG_v1 {"time_micros": 1760436936566320, "job": 30, "event": "compaction_finished", "compaction_time_micros": 79663, "compaction_time_cpu_micros": 44426, "output_level": 6, "num_output_files": 1, "total_output_size": 15106360, "num_input_records": 13078, "num_output_records": 12555, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005486731/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436936566645, "job": 30, "event": "table_file_deletion", "file_number": 53} Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436936569294, "job": 30, "event": "table_file_deletion", "file_number": 51} Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.484307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.569368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.569376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.569379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.569382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:15:36 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:15:36.569385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:15:36 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:36.620 2 INFO neutron.agent.securitygroups_rpc [None req-08f7afd9-ff65-4858-8bd4-f34f7ffad2b2 30647d4700b846dba79efd27fad03f3d 
a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['971079f2-c850-495f-833b-6314800b21a7']#033[00m Oct 14 06:15:36 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:36.779 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:35Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a3f58259-279f-4f85-b5b9-83b16b74d0c9, ip_allocation=immediate, mac_address=fa:16:3e:61:3e:42, name=tempest-PortsIpV6TestJSON-1828150182, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:31Z, description=, dns_domain=, id=ad377052-7a70-4723-8afc-3b9c2f0a726f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-1397607064, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34325, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1424, status=ACTIVE, subnets=['2f53b7d9-f5b0-43e4-91a1-3c50f28b0bd4'], tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:32Z, vlan_transparent=None, network_id=ad377052-7a70-4723-8afc-3b9c2f0a726f, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['971079f2-c850-495f-833b-6314800b21a7'], standard_attr_id=1796, status=DOWN, tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:36Z on network ad377052-7a70-4723-8afc-3b9c2f0a726f#033[00m Oct 14 06:15:36 localhost neutron_dhcp_agent[270385]: 
2025-10-14 10:15:36.820 270389 INFO neutron.agent.dhcp.agent [None req-540061c1-a75c-4a1c-a4eb-c51d05034733 - - - - - -] DHCP configuration for ports {'143b3897-f3fa-456b-9edc-636bc769c8ed', 'f62e0da8-fb0c-4930-a904-cbdda6127bc9'} is completed#033[00m Oct 14 06:15:36 localhost dnsmasq[331716]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 1 addresses Oct 14 06:15:36 localhost podman[331733]: 2025-10-14 10:15:36.975050394 +0000 UTC m=+0.060106140 container kill 8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:15:36 localhost dnsmasq-dhcp[331716]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host Oct 14 06:15:36 localhost dnsmasq-dhcp[331716]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts Oct 14 06:15:37 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:37.145 270389 INFO neutron.agent.linux.ip_lib [None req-2c51813e-2d9c-4464-8f67-b094601ae0d0 - - - - - -] Device tap41a2fcb0-eb cannot be used as it has no MAC address#033[00m Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost kernel: device tap41a2fcb0-eb entered promiscuous mode Oct 14 06:15:37 localhost NetworkManager[5972]: [1760436937.1796] manager: (tap41a2fcb0-eb): new Generic device (/org/freedesktop/NetworkManager/Devices/47) Oct 14 06:15:37 localhost 
ovn_controller[156286]: 2025-10-14T10:15:37Z|00243|binding|INFO|Claiming lport 41a2fcb0-eb19-4e06-80a2-53c214eeebfb for this chassis. Oct 14 06:15:37 localhost ovn_controller[156286]: 2025-10-14T10:15:37Z|00244|binding|INFO|41a2fcb0-eb19-4e06-80a2-53c214eeebfb: Claiming unknown Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:37.193 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-01615b79-42f6-4a63-8381-b989388aa4fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01615b79-42f6-4a63-8381-b989388aa4fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7bf1be3a6a454996a4414fad306906f1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=711f389e-c8a8-46aa-91f4-7dec1eb61139, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=41a2fcb0-eb19-4e06-80a2-53c214eeebfb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:37.195 
161932 INFO neutron.agent.ovn.metadata.agent [-] Port 41a2fcb0-eb19-4e06-80a2-53c214eeebfb in datapath 01615b79-42f6-4a63-8381-b989388aa4fc bound to our chassis#033[00m Oct 14 06:15:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:37.199 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 306e328a-d719-436b-bfa8-dd452ebe85ae IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:15:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:37.199 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01615b79-42f6-4a63-8381-b989388aa4fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:15:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:37.200 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0ac197-98b2-4e9a-8804-cf7fffebe9e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:37 localhost journal[236030]: ethtool ioctl error on tap41a2fcb0-eb: No such device Oct 14 06:15:37 localhost ovn_controller[156286]: 2025-10-14T10:15:37Z|00245|binding|INFO|Setting lport 41a2fcb0-eb19-4e06-80a2-53c214eeebfb ovn-installed in OVS Oct 14 06:15:37 localhost ovn_controller[156286]: 2025-10-14T10:15:37Z|00246|binding|INFO|Setting lport 41a2fcb0-eb19-4e06-80a2-53c214eeebfb up in Southbound Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost journal[236030]: ethtool ioctl error on tap41a2fcb0-eb: No such device Oct 14 06:15:37 localhost journal[236030]: ethtool ioctl error on tap41a2fcb0-eb: No such device Oct 14 06:15:37 localhost journal[236030]: ethtool ioctl error 
on tap41a2fcb0-eb: No such device Oct 14 06:15:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Oct 14 06:15:37 localhost journal[236030]: ethtool ioctl error on tap41a2fcb0-eb: No such device Oct 14 06:15:37 localhost journal[236030]: ethtool ioctl error on tap41a2fcb0-eb: No such device Oct 14 06:15:37 localhost journal[236030]: ethtool ioctl error on tap41a2fcb0-eb: No such device Oct 14 06:15:37 localhost journal[236030]: ethtool ioctl error on tap41a2fcb0-eb: No such device Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:37.321 270389 INFO neutron.agent.dhcp.agent [None req-3ac5d507-5290-448c-901a-0a974c895562 - - - - - -] DHCP configuration for ports {'a3f58259-279f-4f85-b5b9-83b16b74d0c9'} is completed#033[00m Oct 14 06:15:37 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:37.712 270389 INFO neutron.agent.linux.ip_lib [None req-d2afb7d6-af5a-489d-92ac-964ef0b4f79b - - - - - -] Device tap969522bd-f1 cannot be used as it has no MAC address#033[00m Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost kernel: device tap969522bd-f1 entered promiscuous mode Oct 14 06:15:37 localhost NetworkManager[5972]: [1760436937.7445] manager: (tap969522bd-f1): new Generic device (/org/freedesktop/NetworkManager/Devices/48) Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 
06:15:37 localhost ovn_controller[156286]: 2025-10-14T10:15:37Z|00247|binding|INFO|Claiming lport 969522bd-f11b-4b1e-9630-71461c95ba3a for this chassis. Oct 14 06:15:37 localhost ovn_controller[156286]: 2025-10-14T10:15:37Z|00248|binding|INFO|969522bd-f11b-4b1e-9630-71461c95ba3a: Claiming unknown Oct 14 06:15:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:37.756 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=969522bd-f11b-4b1e-9630-71461c95ba3a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:37.758 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 969522bd-f11b-4b1e-9630-71461c95ba3a in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:15:37 
localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:37.762 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:37.763 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[8a896704-4183-47bd-8d85-01dbd7339137]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:37.786 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:37 localhost ovn_controller[156286]: 2025-10-14T10:15:37Z|00249|binding|INFO|Setting lport 969522bd-f11b-4b1e-9630-71461c95ba3a ovn-installed in OVS Oct 14 06:15:37 localhost ovn_controller[156286]: 2025-10-14T10:15:37Z|00250|binding|INFO|Setting lport 969522bd-f11b-4b1e-9630-71461c95ba3a up in Southbound Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost 
nova_compute[295778]: 2025-10-14 10:15:37.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:37 localhost nova_compute[295778]: 2025-10-14 10:15:37.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:38 localhost podman[331859]: Oct 14 06:15:38 localhost podman[331859]: 2025-10-14 10:15:38.194809494 +0000 UTC m=+0.099857708 container create 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:38 localhost systemd[1]: Started libpod-conmon-4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701.scope. Oct 14 06:15:38 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:38.240 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:38 localhost podman[331859]: 2025-10-14 10:15:38.144867576 +0000 UTC m=+0.049915820 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:38 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d893db8081823505758bfc0e60e4290565113d68d45f74aa8ed482a2637e1eee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:38 localhost podman[331859]: 2025-10-14 10:15:38.266693557 +0000 UTC m=+0.171741781 container init 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:15:38 localhost podman[331859]: 2025-10-14 10:15:38.275908922 +0000 UTC m=+0.180957146 container start 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:38 localhost dnsmasq[331888]: started, version 2.85 cachesize 150 Oct 14 06:15:38 localhost dnsmasq[331888]: DNS service limited to local subnets Oct 14 06:15:38 localhost dnsmasq[331888]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:38 localhost dnsmasq[331888]: warning: no upstream servers 
configured Oct 14 06:15:38 localhost dnsmasq-dhcp[331888]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:15:38 localhost dnsmasq[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/addn_hosts - 0 addresses Oct 14 06:15:38 localhost dnsmasq-dhcp[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/host Oct 14 06:15:38 localhost dnsmasq-dhcp[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/opts Oct 14 06:15:38 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:38.615 270389 INFO neutron.agent.dhcp.agent [None req-f6301421-8804-4e6a-854a-ff69c2dcd6f1 - - - - - -] DHCP configuration for ports {'a6bc75ca-b1cc-4875-9282-c3ab75c43ca0'} is completed#033[00m Oct 14 06:15:38 localhost dnsmasq[331716]: exiting on receipt of SIGTERM Oct 14 06:15:38 localhost systemd[1]: libpod-8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16.scope: Deactivated successfully. Oct 14 06:15:38 localhost podman[331913]: 2025-10-14 10:15:38.626818857 +0000 UTC m=+0.116500549 container kill 8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:15:38 localhost podman[331941]: Oct 14 06:15:38 localhost podman[331951]: 2025-10-14 10:15:38.693857151 +0000 UTC m=+0.060913712 container died 8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, 
org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:38 localhost systemd[1]: tmp-crun.Mrdx1s.mount: Deactivated successfully. Oct 14 06:15:38 localhost podman[331951]: 2025-10-14 10:15:38.733461264 +0000 UTC m=+0.100517785 container cleanup 8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:15:38 localhost systemd[1]: libpod-conmon-8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16.scope: Deactivated successfully. 
Oct 14 06:15:38 localhost podman[331941]: 2025-10-14 10:15:38.747323053 +0000 UTC m=+0.143562590 container create 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:15:38 localhost podman[331941]: 2025-10-14 10:15:38.657274568 +0000 UTC m=+0.053514135 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:38 localhost systemd[1]: Started libpod-conmon-8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc.scope. Oct 14 06:15:38 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129faeb09450e4fd5f5f81472305b234e9acd7f95935caa96b19b9b31571789b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:38 localhost podman[331958]: 2025-10-14 10:15:38.834683787 +0000 UTC m=+0.184562221 container remove 8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:38 localhost podman[331941]: 2025-10-14 10:15:38.863467293 +0000 UTC m=+0.259706830 container init 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:15:38 localhost podman[331941]: 2025-10-14 10:15:38.872533954 +0000 UTC m=+0.268773491 container start 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:15:38 localhost dnsmasq[331988]: started, version 2.85 cachesize 150 Oct 14 06:15:38 localhost dnsmasq[331988]: DNS service limited to local subnets Oct 14 06:15:38 localhost dnsmasq[331988]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:38 localhost dnsmasq[331988]: warning: no upstream servers configured Oct 14 06:15:38 localhost dnsmasq[331988]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:39.088 270389 INFO neutron.agent.dhcp.agent [None req-eaaf7434-66c5-495f-8240-7380a2a97366 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:15:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:15:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:15:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:15:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:15:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:15:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:15:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Oct 14 06:15:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:39.383 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:38Z, description=, device_id=8a1c4112-8c7d-4de7-b044-7b091ec677ce, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=da78ec40-df80-437b-b270-2906117a6f4b, ip_allocation=immediate, mac_address=fa:16:3e:64:a9:63, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:15:30Z, description=, dns_domain=, id=01615b79-42f6-4a63-8381-b989388aa4fc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1044317334, port_security_enabled=True, project_id=7bf1be3a6a454996a4414fad306906f1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44952, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1769, status=ACTIVE, subnets=['9044d74c-91b9-4bd4-9209-5dfb578b82cc'], tags=[], tenant_id=7bf1be3a6a454996a4414fad306906f1, updated_at=2025-10-14T10:15:34Z, vlan_transparent=None, network_id=01615b79-42f6-4a63-8381-b989388aa4fc, port_security_enabled=False, project_id=7bf1be3a6a454996a4414fad306906f1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1809, status=DOWN, tags=[], tenant_id=7bf1be3a6a454996a4414fad306906f1, updated_at=2025-10-14T10:15:39Z on network 01615b79-42f6-4a63-8381-b989388aa4fc#033[00m
Oct 14 06:15:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:39.416 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:38Z, description=, device_id=cbfe3256-e062-425d-8f0e-37d366d3c643, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f27e8880-a03d-4bc9-99c9-5ae535c855dc, ip_allocation=immediate, mac_address=fa:16:3e:ef:b3:b7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['ec5085f0-17d0-437e-aa45-7eb88540c3b8'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:36Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=False, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1811, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:39Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m
Oct 14 06:15:39 localhost dnsmasq[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/addn_hosts - 1 addresses
Oct 14 06:15:39 localhost podman[332029]: 2025-10-14 10:15:39.595260062 +0000 UTC m=+0.062713170 container kill 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:15:39 localhost dnsmasq-dhcp[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/host
Oct 14 06:15:39 localhost dnsmasq-dhcp[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/opts
Oct 14 06:15:39 localhost dnsmasq[331988]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses
Oct 14 06:15:39 localhost podman[332038]: 2025-10-14 10:15:39.619764264 +0000 UTC m=+0.065449753 container kill 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 06:15:39 localhost systemd[1]: var-lib-containers-storage-overlay-60dcb1b65b71ff85f0fd5cc9c657b62085e32bdea70b8ca1327425382355a90e-merged.mount: Deactivated successfully.
Oct 14 06:15:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fb66fc4731d7876012577c47002c96eda7de8dff32585cfe8174ac3efabfa16-userdata-shm.mount: Deactivated successfully.
Oct 14 06:15:39 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:39.705 2 INFO neutron.agent.securitygroups_rpc [None req-ae8cfac3-21ba-4f3b-a5c6-b8d0c88fe156 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['971079f2-c850-495f-833b-6314800b21a7', '45b76fa0-7c48-480e-a89f-69be4691f61d']#033[00m
Oct 14 06:15:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:15:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:40.014 270389 INFO neutron.agent.dhcp.agent [None req-485e75dd-73c8-4bba-9798-bba370cdb99b - - - - - -] DHCP configuration for ports {'f27e8880-a03d-4bc9-99c9-5ae535c855dc', 'da78ec40-df80-437b-b270-2906117a6f4b'} is completed#033[00m
Oct 14 06:15:40 localhost nova_compute[295778]: 2025-10-14 10:15:40.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:40 localhost podman[332115]:
Oct 14 06:15:40 localhost podman[332115]: 2025-10-14 10:15:40.361487555 +0000 UTC m=+0.089346508 container create 637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 06:15:40 localhost systemd[1]: Started libpod-conmon-637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6.scope.
Oct 14 06:15:40 localhost systemd[1]: Started libcrun container.
Oct 14 06:15:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def6394891d0a7c9828371d77d736b366fd842d6a38213de3fd24647015f2459/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:15:40 localhost podman[332115]: 2025-10-14 10:15:40.318187284 +0000 UTC m=+0.046046267 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:15:40 localhost podman[332115]: 2025-10-14 10:15:40.427743478 +0000 UTC m=+0.155602431 container init 637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:40 localhost podman[332115]: 2025-10-14 10:15:40.436534742 +0000 UTC m=+0.164393705 container start 637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:40 localhost dnsmasq[332134]: started, version 2.85 cachesize 150
Oct 14 06:15:40 localhost dnsmasq[332134]: DNS service limited to local subnets
Oct 14 06:15:40 localhost dnsmasq[332134]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:15:40 localhost dnsmasq[332134]: warning: no upstream servers configured
Oct 14 06:15:40 localhost dnsmasq-dhcp[332134]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 14 06:15:40 localhost dnsmasq-dhcp[332134]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 14 06:15:40 localhost dnsmasq[332134]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 1 addresses
Oct 14 06:15:40 localhost dnsmasq-dhcp[332134]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host
Oct 14 06:15:40 localhost dnsmasq-dhcp[332134]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts
Oct 14 06:15:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:40.494 270389 INFO neutron.agent.dhcp.agent [None req-3c6818eb-a3cb-479b-8c4e-fc91b8e62d30 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:35Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a3f58259-279f-4f85-b5b9-83b16b74d0c9, ip_allocation=immediate, mac_address=fa:16:3e:61:3e:42, name=tempest-PortsIpV6TestJSON-155648009, network_id=ad377052-7a70-4723-8afc-3b9c2f0a726f, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['45b76fa0-7c48-480e-a89f-69be4691f61d'], standard_attr_id=1796, status=DOWN, tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:39Z on network ad377052-7a70-4723-8afc-3b9c2f0a726f#033[00m
Oct 14 06:15:40 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:40.555 2 INFO neutron.agent.securitygroups_rpc [None req-a8d73588-3b29-4fe7-973c-8b131610fa4b daa37e9562ff4164ba297586fd32a970 8e6e5d2b322d4a35bd40e5b22dbee82d - - default default] Security group member updated ['5738ce03-d625-43e9-892b-9c4d671a952f']#033[00m
Oct 14 06:15:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:40.610 270389 INFO neutron.agent.dhcp.agent [None req-d207d001-46e2-4727-8d81-6eaa04901246 - - - - - -] DHCP configuration for ports {'a3f58259-279f-4f85-b5b9-83b16b74d0c9', '143b3897-f3fa-456b-9edc-636bc769c8ed', '4e355359-04e1-474e-9fca-5892b54dbee2', 'f62e0da8-fb0c-4930-a904-cbdda6127bc9'} is completed#033[00m
Oct 14 06:15:40 localhost podman[332152]: 2025-10-14 10:15:40.693616562 +0000 UTC m=+0.066210653 container kill 637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:15:40 localhost dnsmasq[332134]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 1 addresses
Oct 14 06:15:40 localhost dnsmasq-dhcp[332134]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host
Oct 14 06:15:40 localhost dnsmasq-dhcp[332134]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts
Oct 14 06:15:40 localhost systemd[1]: tmp-crun.N8lHjo.mount: Deactivated successfully.
Oct 14 06:15:40 localhost podman[332185]: 2025-10-14 10:15:40.824255417 +0000 UTC m=+0.068552915 container kill 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:40 localhost dnsmasq[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/addn_hosts - 1 addresses
Oct 14 06:15:40 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/host
Oct 14 06:15:40 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/opts
Oct 14 06:15:40 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:40.914 2 INFO neutron.agent.securitygroups_rpc [None req-0a8d036a-6be5-4605-bfe1-862cd29794d4 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['45b76fa0-7c48-480e-a89f-69be4691f61d']#033[00m
Oct 14 06:15:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:40.931 270389 INFO neutron.agent.dhcp.agent [None req-82fd5e25-601b-4d69-b034-204414758cec - - - - - -] DHCP configuration for ports {'a3f58259-279f-4f85-b5b9-83b16b74d0c9'} is completed#033[00m
Oct 14 06:15:41 localhost dnsmasq[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/addn_hosts - 0 addresses
Oct 14 06:15:41 localhost dnsmasq-dhcp[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/host
Oct 14 06:15:41 localhost podman[332229]: 2025-10-14 10:15:41.052102049 +0000 UTC m=+0.057558103 container kill caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 06:15:41 localhost dnsmasq-dhcp[328765]: read /var/lib/neutron/dhcp/d9e53ed8-ad92-47c7-993a-500ed592c18d/opts
Oct 14 06:15:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:41.127 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:15:41 localhost ovn_controller[156286]: 2025-10-14T10:15:41Z|00251|ovn_bfd|INFO|Disabled BFD on interface ovn-31b4da-0
Oct 14 06:15:41 localhost ovn_controller[156286]: 2025-10-14T10:15:41Z|00252|ovn_bfd|INFO|Disabled BFD on interface ovn-953af5-0
Oct 14 06:15:41 localhost ovn_controller[156286]: 2025-10-14T10:15:41Z|00253|ovn_bfd|INFO|Disabled BFD on interface ovn-4e3575-0
Oct 14 06:15:41 localhost nova_compute[295778]: 2025-10-14 10:15:41.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:41 localhost nova_compute[295778]: 2025-10-14 10:15:41.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:41 localhost nova_compute[295778]: 2025-10-14 10:15:41.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:41 localhost dnsmasq[332134]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 0 addresses
Oct 14 06:15:41 localhost dnsmasq-dhcp[332134]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host
Oct 14 06:15:41 localhost dnsmasq-dhcp[332134]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts
Oct 14 06:15:41 localhost podman[332259]: 2025-10-14 10:15:41.197824216 +0000 UTC m=+0.066309146 container kill 637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 06:15:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Oct 14 06:15:41 localhost ovn_controller[156286]: 2025-10-14T10:15:41Z|00254|binding|INFO|Releasing lport 3f587c0d-9169-4fce-9902-0017eddbdea0 from this chassis (sb_readonly=0)
Oct 14 06:15:41 localhost nova_compute[295778]: 2025-10-14 10:15:41.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:41 localhost ovn_controller[156286]: 2025-10-14T10:15:41Z|00255|binding|INFO|Setting lport 3f587c0d-9169-4fce-9902-0017eddbdea0 down in Southbound
Oct 14 06:15:41 localhost kernel: device tap3f587c0d-91 left promiscuous mode
Oct 14 06:15:41 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:41.536 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-d9e53ed8-ad92-47c7-993a-500ed592c18d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9e53ed8-ad92-47c7-993a-500ed592c18d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '458840010c184f038de4a002f5b46e4a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a448f1b-677d-4a0a-8950-90770ecd1465, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3f587c0d-9169-4fce-9902-0017eddbdea0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:15:41 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:41.538 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 3f587c0d-9169-4fce-9902-0017eddbdea0 in datapath d9e53ed8-ad92-47c7-993a-500ed592c18d unbound from our chassis#033[00m
Oct 14 06:15:41 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:41.541 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9e53ed8-ad92-47c7-993a-500ed592c18d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:15:41 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:41.542 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[72318541-27c9-4e7b-9627-4976e0200528]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:15:41 localhost nova_compute[295778]: 2025-10-14 10:15:41.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:41.918 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:38Z, description=, device_id=cbfe3256-e062-425d-8f0e-37d366d3c643, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f27e8880-a03d-4bc9-99c9-5ae535c855dc, ip_allocation=immediate, mac_address=fa:16:3e:ef:b3:b7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['ec5085f0-17d0-437e-aa45-7eb88540c3b8'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:36Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=False, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1811, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:39Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m
Oct 14 06:15:42 localhost systemd[1]: tmp-crun.SCJlhi.mount: Deactivated successfully.
Oct 14 06:15:42 localhost podman[332301]: 2025-10-14 10:15:42.007511396 +0000 UTC m=+0.071448392 container kill 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:15:42 localhost dnsmasq[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/addn_hosts - 0 addresses
Oct 14 06:15:42 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/host
Oct 14 06:15:42 localhost dnsmasq-dhcp[330695]: read /var/lib/neutron/dhcp/ed9fc40f-a480-44f3-8674-2504cda1a2ad/opts
Oct 14 06:15:42 localhost dnsmasq[331988]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses
Oct 14 06:15:42 localhost podman[332334]: 2025-10-14 10:15:42.135637235 +0000 UTC m=+0.067993431 container kill 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:15:42 localhost ovn_controller[156286]: 2025-10-14T10:15:42Z|00256|binding|INFO|Releasing lport 603e7248-173f-4d7a-9c09-0d9bc9b4624e from this chassis (sb_readonly=0)
Oct 14 06:15:42 localhost nova_compute[295778]: 2025-10-14 10:15:42.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:42 localhost ovn_controller[156286]: 2025-10-14T10:15:42Z|00257|binding|INFO|Setting lport 603e7248-173f-4d7a-9c09-0d9bc9b4624e down in Southbound
Oct 14 06:15:42 localhost kernel: device tap603e7248-17 left promiscuous mode
Oct 14 06:15:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:42.253 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-ed9fc40f-a480-44f3-8674-2504cda1a2ad', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed9fc40f-a480-44f3-8674-2504cda1a2ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e6e5d2b322d4a35bd40e5b22dbee82d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=708441c7-6de6-4a96-9516-bf9d4722d80d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=603e7248-173f-4d7a-9c09-0d9bc9b4624e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:15:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:42.255 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 603e7248-173f-4d7a-9c09-0d9bc9b4624e in datapath ed9fc40f-a480-44f3-8674-2504cda1a2ad unbound from our chassis#033[00m
Oct 14 06:15:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:42.258 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed9fc40f-a480-44f3-8674-2504cda1a2ad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:15:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:42.259 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc1e831-8071-4c85-9302-e57c072f6a9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:15:42 localhost nova_compute[295778]: 2025-10-14 10:15:42.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:42.389 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:38Z, description=, device_id=8a1c4112-8c7d-4de7-b044-7b091ec677ce, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=da78ec40-df80-437b-b270-2906117a6f4b, ip_allocation=immediate, mac_address=fa:16:3e:64:a9:63, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:15:30Z, description=, dns_domain=, id=01615b79-42f6-4a63-8381-b989388aa4fc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1044317334, port_security_enabled=True, project_id=7bf1be3a6a454996a4414fad306906f1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44952, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1769, status=ACTIVE, subnets=['9044d74c-91b9-4bd4-9209-5dfb578b82cc'], tags=[], tenant_id=7bf1be3a6a454996a4414fad306906f1, updated_at=2025-10-14T10:15:34Z, vlan_transparent=None, network_id=01615b79-42f6-4a63-8381-b989388aa4fc, port_security_enabled=False, project_id=7bf1be3a6a454996a4414fad306906f1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1809, status=DOWN, tags=[], tenant_id=7bf1be3a6a454996a4414fad306906f1, updated_at=2025-10-14T10:15:39Z on network 01615b79-42f6-4a63-8381-b989388aa4fc#033[00m
Oct 14 06:15:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:42.398 270389 INFO neutron.agent.dhcp.agent [None req-e8dded91-ea58-43e8-b14e-9e8e235b6db6 - - - - - -] DHCP configuration for ports {'f27e8880-a03d-4bc9-99c9-5ae535c855dc'} is completed#033[00m
Oct 14 06:15:42 localhost systemd[1]: tmp-crun.h5zMTQ.mount: Deactivated successfully.
Oct 14 06:15:42 localhost podman[332376]: 2025-10-14 10:15:42.639668414 +0000 UTC m=+0.094663960 container kill 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:15:42 localhost nova_compute[295778]: 2025-10-14 10:15:42.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:42 localhost dnsmasq[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/addn_hosts - 1 addresses
Oct 14 06:15:42 localhost dnsmasq-dhcp[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/host
Oct 14 06:15:42 localhost dnsmasq-dhcp[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/opts
Oct 14 06:15:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:42.842 270389 INFO neutron.agent.dhcp.agent [None req-94c97c86-83f0-4f7d-aefc-688a0d2f2866 - - - - - -] DHCP configuration for ports {'da78ec40-df80-437b-b270-2906117a6f4b'} is completed#033[00m
Oct 14 06:15:42 localhost nova_compute[295778]: 2025-10-14 10:15:42.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:15:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:15:43 localhost podman[332413]: 2025-10-14 10:15:43.259300968 +0000 UTC m=+0.056947177 container kill 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:15:43 localhost dnsmasq[330695]: exiting on receipt of SIGTERM
Oct 14 06:15:43 localhost systemd[1]: libpod-1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553.scope: Deactivated successfully.
Oct 14 06:15:43 localhost podman[332425]: 2025-10-14 10:15:43.325253762 +0000 UTC m=+0.051137761 container died 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:15:43 localhost podman[332425]: 2025-10-14 10:15:43.355937118 +0000 UTC m=+0.081821077 container cleanup 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:43 localhost systemd[1]: libpod-conmon-1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553.scope: Deactivated successfully.
Oct 14 06:15:43 localhost podman[332427]: 2025-10-14 10:15:43.408051895 +0000 UTC m=+0.127578015 container remove 1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ed9fc40f-a480-44f3-8674-2504cda1a2ad, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:15:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:43.460 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:15:43 localhost dnsmasq[332134]: exiting on receipt of SIGTERM
Oct 14 06:15:43 localhost podman[332467]: 2025-10-14 10:15:43.522046438 +0000 UTC m=+0.056155066 container kill 637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 06:15:43 localhost systemd[1]: libpod-637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6.scope: Deactivated successfully.
Oct 14 06:15:43 localhost podman[332478]: 2025-10-14 10:15:43.573596959 +0000 UTC m=+0.043324234 container died 637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:15:43 localhost systemd[1]: var-lib-containers-storage-overlay-70c55f420f1eb15759f0a725be88d7e1901d1a9d87995a525e4d7c0e197f60b7-merged.mount: Deactivated successfully. Oct 14 06:15:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f505005d1675962e228721d7e0ea810a9f528c08664082ff7c1d97e4d83c553-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:43 localhost systemd[1]: run-netns-qdhcp\x2ded9fc40f\x2da480\x2d44f3\x2d8674\x2d2504cda1a2ad.mount: Deactivated successfully. Oct 14 06:15:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:43 localhost systemd[1]: var-lib-containers-storage-overlay-def6394891d0a7c9828371d77d736b366fd842d6a38213de3fd24647015f2459-merged.mount: Deactivated successfully. 
Oct 14 06:15:43 localhost podman[332478]: 2025-10-14 10:15:43.651025779 +0000 UTC m=+0.120753004 container cleanup 637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:43 localhost systemd[1]: libpod-conmon-637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6.scope: Deactivated successfully. Oct 14 06:15:43 localhost podman[332488]: 2025-10-14 10:15:43.676575199 +0000 UTC m=+0.134073958 container remove 637984b7b6175a74ea8f82effcfd84615d64a0c2ab2caacde1afc67215675dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:43 localhost dnsmasq[331988]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:43 localhost podman[332542]: 2025-10-14 10:15:43.969696837 +0000 UTC m=+0.063923812 container kill 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:15:44 localhost nova_compute[295778]: 2025-10-14 10:15:44.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:44 localhost ovn_controller[156286]: 2025-10-14T10:15:44Z|00258|binding|INFO|Releasing lport 969522bd-f11b-4b1e-9630-71461c95ba3a from this chassis (sb_readonly=0) Oct 14 06:15:44 localhost ovn_controller[156286]: 2025-10-14T10:15:44Z|00259|binding|INFO|Setting lport 969522bd-f11b-4b1e-9630-71461c95ba3a down in Southbound Oct 14 06:15:44 localhost kernel: device tap969522bd-f1 left promiscuous mode Oct 14 06:15:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:44.302 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=969522bd-f11b-4b1e-9630-71461c95ba3a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:44.303 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 969522bd-f11b-4b1e-9630-71461c95ba3a in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:15:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:44.305 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74049e43-4aa7-4318-9233-a58980c3495b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:44 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:44.306 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[48a56b7b-224e-413b-b8fb-1d567e301d05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:44 localhost nova_compute[295778]: 2025-10-14 10:15:44.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:44.414 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:15:44 localhost podman[332596]: 2025-10-14 10:15:44.567612714 +0000 UTC m=+0.103120775 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:15:44 localhost podman[332596]: 2025-10-14 10:15:44.579297285 +0000 UTC m=+0.114805336 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:44 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:15:44 localhost podman[332607]: Oct 14 06:15:44 localhost podman[332607]: 2025-10-14 10:15:44.644310164 +0000 UTC m=+0.154931393 container create 28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:15:44 localhost podman[332607]: 2025-10-14 10:15:44.59718798 +0000 UTC m=+0.107809249 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:44 localhost systemd[1]: Started libpod-conmon-28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039.scope. Oct 14 06:15:44 localhost systemd[1]: tmp-crun.JnRuAu.mount: Deactivated successfully. Oct 14 06:15:44 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349030665fe1fa6bc1fa06224b684dd63cbd577983109f2b5702a57f62a9951a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:44 localhost podman[332607]: 2025-10-14 10:15:44.731772511 +0000 UTC m=+0.242393740 container init 28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 06:15:44 localhost podman[332607]: 2025-10-14 10:15:44.740982936 +0000 UTC m=+0.251604165 container start 28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:15:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:15:44 localhost dnsmasq[332639]: started, version 2.85 cachesize 150 Oct 14 06:15:44 localhost dnsmasq[332639]: DNS service limited to local subnets Oct 14 06:15:44 localhost dnsmasq[332639]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:44 localhost dnsmasq[332639]: warning: no upstream servers configured Oct 14 06:15:44 localhost dnsmasq-dhcp[332639]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 14 06:15:44 localhost dnsmasq[332639]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 0 addresses Oct 14 06:15:44 localhost dnsmasq-dhcp[332639]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host Oct 14 06:15:44 localhost dnsmasq-dhcp[332639]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts Oct 14 06:15:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:44.959 270389 INFO neutron.agent.dhcp.agent [None req-0e832dd7-fbaf-498f-84c8-5108b52f83aa - - - - - -] DHCP configuration for ports {'143b3897-f3fa-456b-9edc-636bc769c8ed', '4e355359-04e1-474e-9fca-5892b54dbee2', 'f62e0da8-fb0c-4930-a904-cbdda6127bc9'} is completed#033[00m Oct 14 06:15:45 localhost dnsmasq[332639]: exiting on receipt of SIGTERM Oct 14 06:15:45 localhost podman[332657]: 2025-10-14 10:15:45.095825876 +0000 UTC m=+0.059908985 container kill 28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:15:45 localhost systemd[1]: libpod-28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039.scope: Deactivated successfully. 
Oct 14 06:15:45 localhost nova_compute[295778]: 2025-10-14 10:15:45.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:45 localhost podman[332671]: 2025-10-14 10:15:45.173883313 +0000 UTC m=+0.059661148 container died 28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:45 localhost podman[332671]: 2025-10-14 10:15:45.21662464 +0000 UTC m=+0.102402415 container remove 28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:15:45 localhost systemd[1]: libpod-conmon-28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039.scope: Deactivated successfully. 
Oct 14 06:15:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:45 localhost nova_compute[295778]: 2025-10-14 10:15:45.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:45 localhost dnsmasq[331988]: exiting on receipt of SIGTERM Oct 14 06:15:45 localhost podman[332714]: 2025-10-14 10:15:45.596636889 +0000 UTC m=+0.056508404 container kill 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:15:45 localhost systemd[1]: libpod-8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc.scope: Deactivated successfully. Oct 14 06:15:45 localhost systemd[1]: var-lib-containers-storage-overlay-349030665fe1fa6bc1fa06224b684dd63cbd577983109f2b5702a57f62a9951a-merged.mount: Deactivated successfully. Oct 14 06:15:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28fee95a7a8c7d5c8ced49f3e48d29d26d45d7d71e49a0adcc59f094bb401039-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:15:45 localhost podman[332728]: 2025-10-14 10:15:45.66881943 +0000 UTC m=+0.057205734 container died 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:45 localhost systemd[1]: tmp-crun.WiA3Ie.mount: Deactivated successfully. Oct 14 06:15:45 localhost podman[332728]: 2025-10-14 10:15:45.694218566 +0000 UTC m=+0.082604860 container cleanup 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:45 localhost systemd[1]: libpod-conmon-8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc.scope: Deactivated successfully. 
Oct 14 06:15:45 localhost podman[332729]: 2025-10-14 10:15:45.743631191 +0000 UTC m=+0.124800592 container remove 8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 14 06:15:45 localhost dnsmasq[328765]: exiting on receipt of SIGTERM Oct 14 06:15:45 localhost podman[332769]: 2025-10-14 10:15:45.869450677 +0000 UTC m=+0.055989091 container kill caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS) Oct 14 06:15:45 localhost systemd[1]: libpod-caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b.scope: Deactivated successfully. 
Oct 14 06:15:45 localhost podman[332782]: 2025-10-14 10:15:45.944287368 +0000 UTC m=+0.061190149 container died caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:15:46 localhost podman[332782]: 2025-10-14 10:15:46.027985675 +0000 UTC m=+0.144888406 container cleanup caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:15:46 localhost systemd[1]: libpod-conmon-caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b.scope: Deactivated successfully. 
Oct 14 06:15:46 localhost podman[332790]: 2025-10-14 10:15:46.050153235 +0000 UTC m=+0.151233595 container remove caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9e53ed8-ad92-47c7-993a-500ed592c18d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009) Oct 14 06:15:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:46.084 270389 INFO neutron.agent.dhcp.agent [None req-c2122810-2fa8-423a-9312-90d636bcc04f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:46.087 270389 INFO neutron.agent.dhcp.agent [None req-0a045a9d-cf70-49f9-baa8-8584a3d7a7b7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:46 localhost dnsmasq[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/addn_hosts - 0 addresses Oct 14 06:15:46 localhost dnsmasq-dhcp[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/host Oct 14 06:15:46 localhost podman[332835]: 2025-10-14 10:15:46.144342061 +0000 UTC m=+0.053098164 container kill 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:15:46 localhost dnsmasq-dhcp[331888]: read /var/lib/neutron/dhcp/01615b79-42f6-4a63-8381-b989388aa4fc/opts Oct 14 06:15:46 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:46.339 2 INFO neutron.agent.securitygroups_rpc [None req-502c9346-9c9b-4520-9303-69df7912cd6d 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['c7522168-79ac-4334-a811-1abcc722b92a']#033[00m Oct 14 06:15:46 localhost nova_compute[295778]: 2025-10-14 10:15:46.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:46 localhost ovn_controller[156286]: 2025-10-14T10:15:46Z|00260|binding|INFO|Releasing lport 41a2fcb0-eb19-4e06-80a2-53c214eeebfb from this chassis (sb_readonly=0) Oct 14 06:15:46 localhost ovn_controller[156286]: 2025-10-14T10:15:46Z|00261|binding|INFO|Setting lport 41a2fcb0-eb19-4e06-80a2-53c214eeebfb down in Southbound Oct 14 06:15:46 localhost kernel: device tap41a2fcb0-eb left promiscuous mode Oct 14 06:15:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:46.393 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-01615b79-42f6-4a63-8381-b989388aa4fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01615b79-42f6-4a63-8381-b989388aa4fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'7bf1be3a6a454996a4414fad306906f1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=711f389e-c8a8-46aa-91f4-7dec1eb61139, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=41a2fcb0-eb19-4e06-80a2-53c214eeebfb) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:46.395 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 41a2fcb0-eb19-4e06-80a2-53c214eeebfb in datapath 01615b79-42f6-4a63-8381-b989388aa4fc unbound from our chassis#033[00m Oct 14 06:15:46 localhost nova_compute[295778]: 2025-10-14 10:15:46.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:46.398 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01615b79-42f6-4a63-8381-b989388aa4fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:15:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:46.399 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[25f90f1f-d9ce-4dd5-9b3e-4944460c1778]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:46 localhost systemd[1]: var-lib-containers-storage-overlay-129faeb09450e4fd5f5f81472305b234e9acd7f95935caa96b19b9b31571789b-merged.mount: Deactivated successfully. 
Oct 14 06:15:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8255ba66623c1bc530bcbd849f4663c459344310ea21d9f734bf8c940fa2bedc-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:46 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. Oct 14 06:15:46 localhost systemd[1]: var-lib-containers-storage-overlay-0cfafd6841f9025686edd6ba3227036050805ec1a9f52a93901df5cecb80afa6-merged.mount: Deactivated successfully. Oct 14 06:15:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-caf6e733cdb69ec80931c23fbed948582b8fb830ab0ed6445030eb574cce5a0b-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:46 localhost systemd[1]: run-netns-qdhcp\x2dd9e53ed8\x2dad92\x2d47c7\x2d993a\x2d500ed592c18d.mount: Deactivated successfully. Oct 14 06:15:46 localhost podman[332896]: Oct 14 06:15:46 localhost podman[332896]: 2025-10-14 10:15:46.777959087 +0000 UTC m=+0.094018982 container create 06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:15:46 localhost systemd[1]: Started libpod-conmon-06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75.scope. 
Oct 14 06:15:46 localhost podman[332896]: 2025-10-14 10:15:46.729915969 +0000 UTC m=+0.045975914 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:46.829 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:46 localhost systemd[1]: Started libcrun container. Oct 14 06:15:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492f5a01aefded52989145af429b87192858b390eef350a9f4a36ddc3dccc06c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:46 localhost podman[332896]: 2025-10-14 10:15:46.858703726 +0000 UTC m=+0.174763601 container init 06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:46 localhost podman[332896]: 2025-10-14 10:15:46.867871849 +0000 UTC m=+0.183931724 container start 06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3) Oct 14 06:15:46 localhost dnsmasq[332914]: started, version 2.85 cachesize 150 Oct 14 06:15:46 localhost dnsmasq[332914]: DNS service limited to local subnets Oct 14 06:15:46 localhost dnsmasq[332914]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:46 localhost dnsmasq[332914]: warning: no upstream servers configured Oct 14 06:15:46 localhost dnsmasq-dhcp[332914]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 14 06:15:46 localhost dnsmasq-dhcp[332914]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:15:46 localhost dnsmasq[332914]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 0 addresses Oct 14 06:15:46 localhost dnsmasq-dhcp[332914]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host Oct 14 06:15:46 localhost dnsmasq-dhcp[332914]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts Oct 14 06:15:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:46.927 270389 INFO neutron.agent.dhcp.agent [None req-2ca1a05d-be28-4caf-a318-51c6c52258b5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:45Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c157a1f1-e033-4032-8d5e-d3eb94cc40fe, ip_allocation=immediate, mac_address=fa:16:3e:59:27:59, name=tempest-PortsIpV6TestJSON-1966732346, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:31Z, description=, dns_domain=, id=ad377052-7a70-4723-8afc-3b9c2f0a726f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-PortsIpV6TestJSON-test-network-1397607064, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34325, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=1424, status=ACTIVE, subnets=['3c69942e-cbd4-43dd-bc77-2e1b5fdf3515', 'a3bfa31b-a341-4fc0-8b15-027d5915f8fe'], tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:43Z, vlan_transparent=None, network_id=ad377052-7a70-4723-8afc-3b9c2f0a726f, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['c7522168-79ac-4334-a811-1abcc722b92a'], standard_attr_id=1834, status=DOWN, tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:46Z on network ad377052-7a70-4723-8afc-3b9c2f0a726f#033[00m Oct 14 06:15:47 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:47.097 270389 INFO neutron.agent.dhcp.agent [None req-13d49099-69d0-4757-9d5f-9b35bbd33c32 - - - - - -] DHCP configuration for ports {'143b3897-f3fa-456b-9edc-636bc769c8ed', '4e355359-04e1-474e-9fca-5892b54dbee2', 'f62e0da8-fb0c-4930-a904-cbdda6127bc9'} is completed#033[00m Oct 14 06:15:47 localhost dnsmasq[332914]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 1 addresses Oct 14 06:15:47 localhost dnsmasq-dhcp[332914]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host Oct 14 06:15:47 localhost dnsmasq-dhcp[332914]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts Oct 14 06:15:47 localhost podman[332934]: 2025-10-14 10:15:47.117801568 +0000 UTC m=+0.059315479 container kill 06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2) Oct 14 06:15:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:47 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:47.334 270389 INFO neutron.agent.dhcp.agent [None req-e4ae0169-a2e6-4752-aa86-3e195c2baff9 - - - - - -] DHCP configuration for ports {'c157a1f1-e033-4032-8d5e-d3eb94cc40fe'} is completed#033[00m Oct 14 06:15:47 localhost dnsmasq[331888]: exiting on receipt of SIGTERM Oct 14 06:15:47 localhost podman[332974]: 2025-10-14 10:15:47.620133722 +0000 UTC m=+0.057759777 container kill 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:15:47 localhost systemd[1]: libpod-4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701.scope: Deactivated successfully. 
Oct 14 06:15:47 localhost podman[332989]: 2025-10-14 10:15:47.693561006 +0000 UTC m=+0.057406759 container died 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:47 localhost podman[332989]: 2025-10-14 10:15:47.724615522 +0000 UTC m=+0.088461245 container cleanup 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:15:47 localhost systemd[1]: libpod-conmon-4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701.scope: Deactivated successfully. 
Oct 14 06:15:47 localhost podman[332990]: 2025-10-14 10:15:47.813347522 +0000 UTC m=+0.170842786 container remove 4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01615b79-42f6-4a63-8381-b989388aa4fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:15:47 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:47.845 270389 INFO neutron.agent.dhcp.agent [None req-006f92d6-617e-4ec0-9204-d6d47ac3a167 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:47 localhost nova_compute[295778]: 2025-10-14 10:15:47.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:48 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:48.035 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:48 localhost nova_compute[295778]: 2025-10-14 10:15:48.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:48 localhost dnsmasq[332914]: exiting on receipt of SIGTERM Oct 14 06:15:48 localhost podman[333034]: 2025-10-14 10:15:48.577796959 +0000 UTC m=+0.058552649 container kill 06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 06:15:48 localhost systemd[1]: libpod-06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75.scope: Deactivated successfully. Oct 14 06:15:48 localhost systemd[1]: var-lib-containers-storage-overlay-d893db8081823505758bfc0e60e4290565113d68d45f74aa8ed482a2637e1eee-merged.mount: Deactivated successfully. Oct 14 06:15:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4fac48b4ff313017a1c55b3383384f817a4cffed944064a8f6789d1e7d287701-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:48 localhost systemd[1]: run-netns-qdhcp\x2d01615b79\x2d42f6\x2d4a63\x2d8381\x2db989388aa4fc.mount: Deactivated successfully. Oct 14 06:15:48 localhost podman[333046]: 2025-10-14 10:15:48.668130342 +0000 UTC m=+0.076080635 container died 06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 14 06:15:48 localhost systemd[1]: tmp-crun.j9RfSC.mount: Deactivated successfully. 
Oct 14 06:15:48 localhost podman[333046]: 2025-10-14 10:15:48.711225589 +0000 UTC m=+0.119175852 container cleanup 06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:48 localhost systemd[1]: libpod-conmon-06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75.scope: Deactivated successfully. Oct 14 06:15:48 localhost podman[333048]: 2025-10-14 10:15:48.754260313 +0000 UTC m=+0.155269221 container remove 06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:15:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:15:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3918562243' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:15:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:15:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3918562243' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:15:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:15:49 localhost podman[333074]: 2025-10-14 10:15:49.547003673 +0000 UTC m=+0.085471594 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 06:15:49 localhost podman[333074]: 2025-10-14 10:15:49.55701793 +0000 UTC m=+0.095485881 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 06:15:49 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:15:49 localhost podman[333075]: 2025-10-14 10:15:49.648749411 +0000 UTC m=+0.186041031 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:15:49 localhost systemd[1]: var-lib-containers-storage-overlay-492f5a01aefded52989145af429b87192858b390eef350a9f4a36ddc3dccc06c-merged.mount: Deactivated successfully. Oct 14 06:15:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06a8cd8892a280510c3c7d759dd4aa68e2dea8d5676853e3ce14e00f5aacea75-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:49 localhost podman[333075]: 2025-10-14 10:15:49.661064688 +0000 UTC m=+0.198356268 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:15:49 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:15:49 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:49.740 2 INFO neutron.agent.securitygroups_rpc [None req-4ce9037b-3a14-4802-8951-61a892782857 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['04a003a3-9634-4c19-bd44-c2ff00c6dace', 'c7522168-79ac-4334-a811-1abcc722b92a', '64c7cb4a-1e23-4a29-b5a6-11af05e1b20e']#033[00m Oct 14 06:15:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:15:50 localhost nova_compute[295778]: 2025-10-14 10:15:50.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:50 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:50.315 270389 INFO neutron.agent.linux.ip_lib [None req-10cde94a-cbb8-4031-82f9-ff844335835c - - - - - -] Device tapa1df01fc-d1 cannot be used as it has no MAC address#033[00m Oct 14 06:15:50 localhost nova_compute[295778]: 2025-10-14 10:15:50.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:50 localhost kernel: device tapa1df01fc-d1 entered promiscuous mode Oct 14 06:15:50 localhost NetworkManager[5972]: [1760436950.3553] manager: (tapa1df01fc-d1): new Generic device (/org/freedesktop/NetworkManager/Devices/49) Oct 14 06:15:50 localhost nova_compute[295778]: 2025-10-14 10:15:50.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:50 localhost ovn_controller[156286]: 2025-10-14T10:15:50Z|00262|binding|INFO|Claiming lport a1df01fc-d199-4a1e-af67-e72b780e35b7 for this chassis. 
Oct 14 06:15:50 localhost ovn_controller[156286]: 2025-10-14T10:15:50Z|00263|binding|INFO|a1df01fc-d199-4a1e-af67-e72b780e35b7: Claiming unknown Oct 14 06:15:50 localhost systemd-udevd[333139]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:15:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:50.379 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed8:a17a/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a1df01fc-d199-4a1e-af67-e72b780e35b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:50.381 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a1df01fc-d199-4a1e-af67-e72b780e35b7 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:15:50 localhost 
ovn_metadata_agent[161927]: 2025-10-14 10:15:50.384 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port ad9f95c0-875c-462b-9ab2-af240284b71b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:15:50 localhost journal[236030]: ethtool ioctl error on tapa1df01fc-d1: No such device Oct 14 06:15:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:50.385 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:15:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:50.386 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ede79f-66f8-4f87-bcf1-1dca8754ae22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:50 localhost nova_compute[295778]: 2025-10-14 10:15:50.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:50 localhost journal[236030]: ethtool ioctl error on tapa1df01fc-d1: No such device Oct 14 06:15:50 localhost ovn_controller[156286]: 2025-10-14T10:15:50Z|00264|binding|INFO|Setting lport a1df01fc-d199-4a1e-af67-e72b780e35b7 ovn-installed in OVS Oct 14 06:15:50 localhost ovn_controller[156286]: 2025-10-14T10:15:50Z|00265|binding|INFO|Setting lport a1df01fc-d199-4a1e-af67-e72b780e35b7 up in Southbound Oct 14 06:15:50 localhost nova_compute[295778]: 2025-10-14 10:15:50.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:50 localhost nova_compute[295778]: 2025-10-14 10:15:50.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:50 localhost journal[236030]: ethtool ioctl error on tapa1df01fc-d1: No such device Oct 14 06:15:50 localhost journal[236030]: ethtool ioctl error on tapa1df01fc-d1: No such device Oct 14 06:15:50 localhost journal[236030]: ethtool ioctl error on tapa1df01fc-d1: No such device Oct 14 06:15:50 localhost journal[236030]: ethtool ioctl error on tapa1df01fc-d1: No such device Oct 14 06:15:50 localhost journal[236030]: ethtool ioctl error on tapa1df01fc-d1: No such device Oct 14 06:15:50 localhost journal[236030]: ethtool ioctl error on tapa1df01fc-d1: No such device Oct 14 06:15:50 localhost nova_compute[295778]: 2025-10-14 10:15:50.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:50 localhost nova_compute[295778]: 2025-10-14 10:15:50.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:50 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:50.694 2 INFO neutron.agent.securitygroups_rpc [None req-ead15db7-821b-4ac4-934e-f2fb9da96f57 30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['04a003a3-9634-4c19-bd44-c2ff00c6dace', '64c7cb4a-1e23-4a29-b5a6-11af05e1b20e']#033[00m Oct 14 06:15:50 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:50.931 2 INFO neutron.agent.securitygroups_rpc [None req-096ef0b6-80e0-4a9f-bb4f-2c49854ab138 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['3dc93998-b54b-4d14-b147-1dfdbe73ed61']#033[00m Oct 14 06:15:51 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:51.006 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, 
old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2 2001:db8::f816:3eff:fe63:b489'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bb90059a-750e-43da-ba16-03b3dce8c155) old=Port_Binding(mac=['fa:16:3e:63:b4:89 2001:db8::f816:3eff:fe63:b489'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:51 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:51.008 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata 
Port bb90059a-750e-43da-ba16-03b3dce8c155 in datapath 74049e43-4aa7-4318-9233-a58980c3495b updated#033[00m Oct 14 06:15:51 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:51.011 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port ad9f95c0-875c-462b-9ab2-af240284b71b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:15:51 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:51.011 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:15:51 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:51.012 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[0bab368d-109e-4354-a0c8-ee64550b8f96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:51 localhost podman[333224]: Oct 14 06:15:51 localhost podman[333224]: 2025-10-14 10:15:51.111627799 +0000 UTC m=+0.092713338 container create 6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:15:51 localhost podman[333224]: 2025-10-14 10:15:51.06432928 +0000 UTC m=+0.045414869 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:51 localhost systemd[1]: Started 
libpod-conmon-6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9.scope. Oct 14 06:15:51 localhost systemd[1]: Started libcrun container. Oct 14 06:15:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f0e02ca3b4e9af3fe019055fb4111b8976b2e6ddbe608fe1c00245a8183cde/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:51 localhost podman[333224]: 2025-10-14 10:15:51.198028538 +0000 UTC m=+0.179114087 container init 6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:15:51 localhost podman[333224]: 2025-10-14 10:15:51.214529836 +0000 UTC m=+0.195615375 container start 6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:51 localhost dnsmasq[333257]: started, version 2.85 cachesize 150 Oct 14 06:15:51 localhost dnsmasq[333257]: DNS service limited to local subnets Oct 14 06:15:51 localhost dnsmasq[333257]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP 
no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:51 localhost dnsmasq[333257]: warning: no upstream servers configured Oct 14 06:15:51 localhost dnsmasq-dhcp[333257]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 14 06:15:51 localhost dnsmasq-dhcp[333257]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d Oct 14 06:15:51 localhost dnsmasq-dhcp[333257]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:15:51 localhost dnsmasq[333257]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 1 addresses Oct 14 06:15:51 localhost dnsmasq-dhcp[333257]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host Oct 14 06:15:51 localhost dnsmasq-dhcp[333257]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts Oct 14 06:15:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.277 270389 INFO neutron.agent.dhcp.agent [None req-a9146fa9-0b79-47b3-be48-065fe992d3f0 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:45Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c157a1f1-e033-4032-8d5e-d3eb94cc40fe, ip_allocation=immediate, mac_address=fa:16:3e:59:27:59, name=tempest-PortsIpV6TestJSON-363952000, network_id=ad377052-7a70-4723-8afc-3b9c2f0a726f, port_security_enabled=True, project_id=a840994a70374548889747682f4c0fa3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['04a003a3-9634-4c19-bd44-c2ff00c6dace', '64c7cb4a-1e23-4a29-b5a6-11af05e1b20e'], standard_attr_id=1834, 
status=DOWN, tags=[], tenant_id=a840994a70374548889747682f4c0fa3, updated_at=2025-10-14T10:15:49Z on network ad377052-7a70-4723-8afc-3b9c2f0a726f#033[00m Oct 14 06:15:51 localhost dnsmasq[333257]: exiting on receipt of SIGTERM Oct 14 06:15:51 localhost systemd[1]: libpod-6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9.scope: Deactivated successfully. Oct 14 06:15:51 localhost podman[333264]: 2025-10-14 10:15:51.330369398 +0000 UTC m=+0.082961378 container died 6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 06:15:51 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:51.341 2 INFO neutron.agent.securitygroups_rpc [None req-b689f1db-8f1b-40c9-a2dc-453b39103118 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['3dc93998-b54b-4d14-b147-1dfdbe73ed61']#033[00m Oct 14 06:15:51 localhost podman[333264]: 2025-10-14 10:15:51.360215661 +0000 UTC m=+0.112807611 container cleanup 6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0) Oct 14 06:15:51 localhost podman[333282]: 2025-10-14 10:15:51.41014114 +0000 UTC m=+0.079394003 container cleanup 6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:51 localhost systemd[1]: libpod-conmon-6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9.scope: Deactivated successfully. Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.449 270389 ERROR neutron.agent.linux.utils [None req-a9146fa9-0b79-47b3-be48-065fe992d3f0 - - - - - -] Exit code: 125; Cmd: ['/etc/neutron/kill_scripts/dnsmasq-kill', 'HUP', 333257]; Stdin: ; Stdout: Tue Oct 14 10:15:51 AM UTC 2025 Sending signal 'HUP' to () Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: ; Stderr: awk: cmd. 
line:1: fatal: cannot open file `/proc/333257/cgroup' for reading: No such file or directory Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: Error: no names or ids specified Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: Error: you must provide at least one name or id Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: #033[00m Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent [None req-a9146fa9-0b79-47b3-be48-065fe992d3f0 - - - - - -] Unable to reload_allocations dhcp for ad377052-7a70-4723-8afc-3b9c2f0a726f.: neutron_lib.exceptions.ProcessExecutionError: Exit code: 125; Cmd: ['/etc/neutron/kill_scripts/dnsmasq-kill', 'HUP', 333257]; Stdin: ; Stdout: Tue Oct 14 10:15:51 AM UTC 2025 Sending signal 'HUP' to () Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: ; Stderr: awk: cmd. line:1: fatal: cannot open file `/proc/333257/cgroup' for reading: No such file or directory Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: Error: no names or ids specified Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: Error: you must provide at least one name or id Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 671, in reload_allocations Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR 
neutron.agent.dhcp.agent self._spawn_or_reload_process(reload_with_HUP=True) Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 603, in _spawn_or_reload_process Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent pm.enable(reload_cfg=reload_with_HUP, ensure_active=True) Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/external_process.py", line 108, in enable Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent self.reload_cfg() Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/external_process.py", line 117, in reload_cfg Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent self.disable('HUP', delete_pid_file=False) Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/external_process.py", line 132, in disable Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent utils.execute(cmd, addl_env=self.cmd_addl_env, Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py", line 156, in execute Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent raise exceptions.ProcessExecutionError(msg, Oct 14 06:15:51 localhost 
neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent neutron_lib.exceptions.ProcessExecutionError: Exit code: 125; Cmd: ['/etc/neutron/kill_scripts/dnsmasq-kill', 'HUP', 333257]; Stdin: ; Stdout: Tue Oct 14 10:15:51 AM UTC 2025 Sending signal 'HUP' to () Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent ; Stderr: awk: cmd. line:1: fatal: cannot open file `/proc/333257/cgroup' for reading: No such file or directory Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent Error: no names or ids specified Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent Error: you must provide at least one name or id Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.450 270389 ERROR neutron.agent.dhcp.agent #033[00m Oct 14 06:15:51 localhost podman[333304]: 2025-10-14 10:15:51.4676328 +0000 UTC m=+0.075948452 container remove 6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:15:51 localhost podman[333337]: Oct 14 06:15:51 localhost podman[333337]: 2025-10-14 10:15:51.579978758 +0000 UTC m=+0.092895272 container create a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:15:51 localhost systemd[1]: Started libpod-conmon-a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db.scope. Oct 14 06:15:51 localhost systemd[1]: Started libcrun container. Oct 14 06:15:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d532ab22148fc94eca98d5438d50b994e180430f184e3b27b20ad1a0d1badeb4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:51 localhost podman[333337]: 2025-10-14 10:15:51.535660539 +0000 UTC m=+0.048577103 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:51 localhost podman[333337]: 2025-10-14 10:15:51.642218204 +0000 UTC m=+0.155134718 container init a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 14 06:15:51 localhost podman[333337]: 2025-10-14 10:15:51.650981147 +0000 UTC m=+0.163897671 container start a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:51 localhost dnsmasq[333355]: started, version 2.85 cachesize 150 Oct 14 06:15:51 localhost dnsmasq[333355]: DNS service limited to local subnets Oct 14 06:15:51 localhost dnsmasq[333355]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:51 localhost dnsmasq[333355]: warning: no upstream servers configured Oct 14 06:15:51 localhost dnsmasq[333355]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.710 270389 INFO neutron.agent.dhcp.agent [None req-9d5d168e-0e32-4666-9d66-1a100f2b62dc - - - - - -] DHCP configuration for ports {'c157a1f1-e033-4032-8d5e-d3eb94cc40fe', '4e355359-04e1-474e-9fca-5892b54dbee2', 'f62e0da8-fb0c-4930-a904-cbdda6127bc9', '143b3897-f3fa-456b-9edc-636bc769c8ed'} is completed#033[00m Oct 14 06:15:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.713 270389 INFO neutron.agent.dhcp.agent [None req-e304a4ba-3844-4959-8265-3c8915eb9b3a - - - - - -] Synchronizing state#033[00m Oct 14 06:15:51 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:51.871 2 INFO neutron.agent.securitygroups_rpc [None req-33978405-9b76-48c0-9344-3eec93a44254 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:15:51 
localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:51.894 270389 INFO neutron.agent.dhcp.agent [None req-94dd924a-c6a6-4270-97a7-e1e3f64b813b - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'c157a1f1-e033-4032-8d5e-d3eb94cc40fe'} is completed#033[00m Oct 14 06:15:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:52.001 270389 INFO neutron.agent.dhcp.agent [None req-b5595c66-cfd8-467f-b9a2-353d4d3e47ea - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 14 06:15:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:52.002 270389 INFO neutron.agent.dhcp.agent [-] Starting network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration#033[00m Oct 14 06:15:52 localhost systemd[1]: tmp-crun.5EAzqU.mount: Deactivated successfully. Oct 14 06:15:52 localhost systemd[1]: var-lib-containers-storage-overlay-23f0e02ca3b4e9af3fe019055fb4111b8976b2e6ddbe608fe1c00245a8183cde-merged.mount: Deactivated successfully. Oct 14 06:15:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b6b7c2deeb9058baf4824da32cfaf6439d30fcc8f2c51dd040bbdd05b3226d9-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:15:52 localhost podman[333404]: Oct 14 06:15:52 localhost podman[333404]: 2025-10-14 10:15:52.79870144 +0000 UTC m=+0.085442834 container create 0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:15:52 localhost podman[333404]: 2025-10-14 10:15:52.760030101 +0000 UTC m=+0.046771565 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:52 localhost systemd[1]: Started libpod-conmon-0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d.scope. Oct 14 06:15:52 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a70ec2b54db3bb97381e3d471f74aaeb810fa205735960744f66605a979eab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:52 localhost podman[333404]: 2025-10-14 10:15:52.901506496 +0000 UTC m=+0.188247900 container init 0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:15:52 localhost podman[333404]: 2025-10-14 10:15:52.909830497 +0000 UTC m=+0.196571901 container start 0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 06:15:52 localhost dnsmasq[333423]: started, version 2.85 cachesize 150 Oct 14 06:15:52 localhost dnsmasq[333423]: DNS service limited to local subnets Oct 14 06:15:52 localhost dnsmasq[333423]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:52 localhost dnsmasq[333423]: warning: no upstream servers 
configured Oct 14 06:15:52 localhost dnsmasq-dhcp[333423]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 14 06:15:52 localhost dnsmasq-dhcp[333423]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d Oct 14 06:15:52 localhost dnsmasq-dhcp[333423]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:15:52 localhost dnsmasq[333423]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 0 addresses Oct 14 06:15:52 localhost dnsmasq-dhcp[333423]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host Oct 14 06:15:52 localhost dnsmasq-dhcp[333423]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts Oct 14 06:15:52 localhost nova_compute[295778]: 2025-10-14 10:15:52.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:52.981 270389 INFO neutron.agent.dhcp.agent [-] Finished network ad377052-7a70-4723-8afc-3b9c2f0a726f dhcp configuration#033[00m Oct 14 06:15:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:52.982 270389 INFO neutron.agent.dhcp.agent [None req-b5595c66-cfd8-467f-b9a2-353d4d3e47ea - - - - - -] Synchronizing state complete#033[00m Oct 14 06:15:53 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:53.155 2 INFO neutron.agent.securitygroups_rpc [None req-24b63eb0-8d95-4362-967a-7a391c8c282f da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:53 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:53.229 2 INFO neutron.agent.securitygroups_rpc [None req-f2964234-381e-4863-ab6a-4afc28efe02d 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:15:53 
localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.317 270389 INFO neutron.agent.dhcp.agent [None req-abe94d67-a33d-4b4b-95f2-42045ed3c060 - - - - - -] DHCP configuration for ports {'143b3897-f3fa-456b-9edc-636bc769c8ed', '4e355359-04e1-474e-9fca-5892b54dbee2', 'f62e0da8-fb0c-4930-a904-cbdda6127bc9'} is completed#033[00m Oct 14 06:15:53 localhost podman[333456]: 2025-10-14 10:15:53.427528949 +0000 UTC m=+0.063310244 container kill a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:53 localhost dnsmasq[333355]: exiting on receipt of SIGTERM Oct 14 06:15:53 localhost systemd[1]: libpod-a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db.scope: Deactivated successfully. 
Oct 14 06:15:53 localhost dnsmasq[333423]: exiting on receipt of SIGTERM Oct 14 06:15:53 localhost podman[333472]: 2025-10-14 10:15:53.514802622 +0000 UTC m=+0.076199429 container kill 0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:15:53 localhost systemd[1]: libpod-0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d.scope: Deactivated successfully. Oct 14 06:15:53 localhost podman[333481]: 2025-10-14 10:15:53.564287278 +0000 UTC m=+0.108680473 container died a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:15:53 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:53.570 2 INFO neutron.agent.securitygroups_rpc [None req-498638c2-b9d1-4b78-9943-dc4bba565a59 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:53 localhost podman[333506]: 2025-10-14 10:15:53.591915523 +0000 UTC m=+0.058600350 container died 
0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:15:53 localhost podman[333481]: 2025-10-14 10:15:53.660800026 +0000 UTC m=+0.205193181 container remove a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:53 localhost systemd[1]: libpod-conmon-a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db.scope: Deactivated successfully. 
Oct 14 06:15:53 localhost podman[333506]: 2025-10-14 10:15:53.709267425 +0000 UTC m=+0.175952172 container remove 0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:15:53 localhost systemd[1]: libpod-conmon-0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d.scope: Deactivated successfully. Oct 14 06:15:53 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:53.856 2 INFO neutron.agent.securitygroups_rpc [None req-72716ee2-7b94-43d2-a8a0-d7da6ac1202b da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent [None req-c4a69794-0b2a-41a8-a759-13b40ebf9337 - - - - - -] Unable to restart dhcp for 74049e43-4aa7-4318-9233-a58980c3495b.: oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet 23a1c8e5-caa4-4fc7-95bf-c1b74dc1a992: This subnet is being modified by another concurrent operation. 
Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: ['Traceback (most recent call last):\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n result = func(ctxt, **new_args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File 
"/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 329, in update_dhcp_port\n return self._port_action(plugin, context, port, \'update_port\')\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in _port_action\n return plugin.update_port(context, port[\'id\'], port)\n', ' File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n return f_with_retry(*args, **kwargs,\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n 
self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1868, in update_port\n updated_port = super(Ml2Plugin, self).update_port(context, id,\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1557, in update_port\n self.ipam.update_port(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 729, in update_port\n changes = self.update_port_with_ips(context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 455, in update_port_with_ips\n changes = self._update_ips_for_port(context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 379, in _update_ips_for_port\n subnets = self._ipam_get_subnets(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n subnet.read_lock_register(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet 23a1c8e5-caa4-4fc7-95bf-c1b74dc1a992: This subnet is being modified by another concurrent operation.\n']. 
Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 207, in restart Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent self.enable() Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 324, in enable Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent common_utils.wait_until_true(self._enable, timeout=300) Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 744, in wait_until_true Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent while not predicate(): Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 336, in _enable Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent interface_name = self.device_manager.setup( Oct 14 06:15:53 localhost 
neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1825, in setup Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent self.cleanup_stale_devices(network, dhcp_port=None) Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent self.force_reraise() Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent raise self.value Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1820, in setup Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent port = self.setup_dhcp_port(network, segment) Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1755, in setup_dhcp_port Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent dhcp_port = setup_method(network, device_id, dhcp_subnets) Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1660, in 
_setup_existing_dhcp_port Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent port = self.plugin.update_dhcp_port( Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 901, in update_dhcp_port Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent port = cctxt.call(self.context, 'update_dhcp_port', Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron_lib/rpc.py", line 157, in call Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent return self._original_context.call(ctxt, method, **kwargs) Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent result = self.transport._send( Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent return self._driver.send(target, ctxt, message, Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent return 
self._send(target, ctxt, message, wait_for_reply, timeout, Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent raise result Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet 23a1c8e5-caa4-4fc7-95bf-c1b74dc1a992: This subnet is being modified by another concurrent operation. Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent ['Traceback (most recent call last):\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n result = func(ctxt, **new_args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File 
"/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 329, in update_dhcp_port\n return self._port_action(plugin, context, port, \'update_port\')\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in _port_action\n return plugin.update_port(context, port[\'id\'], port)\n', ' File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n return f_with_retry(*args, **kwargs,\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise 
self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1868, in update_port\n updated_port = super(Ml2Plugin, self).update_port(context, id,\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1557, in update_port\n self.ipam.update_port(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 729, in update_port\n changes = self.update_port_with_ips(context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 455, in update_port_with_ips\n changes = self._update_ips_for_port(context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 379, in _update_ips_for_port\n subnets = self._ipam_get_subnets(\n', ' File 
"/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n subnet.read_lock_register(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet 23a1c8e5-caa4-4fc7-95bf-c1b74dc1a992: This subnet is being modified by another concurrent operation.\n']. Oct 14 06:15:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:53.988 270389 ERROR neutron.agent.dhcp.agent #033[00m Oct 14 06:15:54 localhost systemd[1]: tmp-crun.lLCkDk.mount: Deactivated successfully. Oct 14 06:15:54 localhost systemd[1]: var-lib-containers-storage-overlay-d3a70ec2b54db3bb97381e3d471f74aaeb810fa205735960744f66605a979eab-merged.mount: Deactivated successfully. Oct 14 06:15:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0363cf8759dfdff9d16561b3d05dd9d323d70ec756dc11a86c170b5eb810db4d-userdata-shm.mount: Deactivated successfully. Oct 14 06:15:54 localhost systemd[1]: var-lib-containers-storage-overlay-d532ab22148fc94eca98d5438d50b994e180430f184e3b27b20ad1a0d1badeb4-merged.mount: Deactivated successfully. Oct 14 06:15:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9869cae1873752e582a11625149ac942369465d2790ba318da46265d058a1db-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:15:54 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:54.292 270389 INFO neutron.agent.dhcp.agent [None req-1fe15c03-0639-4f7c-9db3-fea81f59d7c9 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7', 'bda62574-f854-488d-85a1-cd1f7d3785f6'} is completed#033[00m Oct 14 06:15:54 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:54.341 2 INFO neutron.agent.securitygroups_rpc [None req-2da02acd-ddd6-4284-9158-a3cd85597c4e da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:54 localhost podman[333585]: Oct 14 06:15:54 localhost podman[333585]: 2025-10-14 10:15:54.562442773 +0000 UTC m=+0.089238635 container create 6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 06:15:54 localhost systemd[1]: Started libpod-conmon-6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9.scope. Oct 14 06:15:54 localhost podman[333585]: 2025-10-14 10:15:54.519798568 +0000 UTC m=+0.046594460 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:54 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff336e878160889e673bbbac185285920568843c05d56d75ba5315e0d2d689d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:54 localhost podman[333585]: 2025-10-14 10:15:54.64240899 +0000 UTC m=+0.169204852 container init 6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:15:54 localhost podman[333585]: 2025-10-14 10:15:54.653008532 +0000 UTC m=+0.179804394 container start 6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:15:54 localhost dnsmasq[333603]: started, version 2.85 cachesize 150 Oct 14 06:15:54 localhost dnsmasq[333603]: DNS service limited to local subnets Oct 14 06:15:54 localhost dnsmasq[333603]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:54 localhost dnsmasq[333603]: warning: no upstream servers 
configured Oct 14 06:15:54 localhost dnsmasq-dhcp[333603]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 14 06:15:54 localhost dnsmasq-dhcp[333603]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d Oct 14 06:15:54 localhost dnsmasq[333603]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/addn_hosts - 0 addresses Oct 14 06:15:54 localhost dnsmasq-dhcp[333603]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/host Oct 14 06:15:54 localhost dnsmasq-dhcp[333603]: read /var/lib/neutron/dhcp/ad377052-7a70-4723-8afc-3b9c2f0a726f/opts Oct 14 06:15:54 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:54.711 270389 INFO neutron.agent.dhcp.agent [None req-b5595c66-cfd8-467f-b9a2-353d4d3e47ea - - - - - -] Synchronizing state#033[00m Oct 14 06:15:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:15:55 localhost podman[333604]: 2025-10-14 10:15:55.044058075 +0000 UTC m=+0.081788457 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:15:55 localhost podman[333604]: 2025-10-14 10:15:55.060235975 +0000 UTC m=+0.097966367 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:15:55 localhost podman[333605]: 2025-10-14 10:15:55.101504773 +0000 UTC m=+0.135148846 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd) Oct 14 06:15:55 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:15:55 localhost podman[333605]: 2025-10-14 10:15:55.142247708 +0000 UTC m=+0.175891771 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 06:15:55 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:15:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:55 localhost nova_compute[295778]: 2025-10-14 10:15:55.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:55.329 2 INFO neutron.agent.securitygroups_rpc [None req-d60dd72d-6d8e-4c14-abe5-931d143b618a da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:55.366 270389 INFO neutron.agent.dhcp.agent [None req-f19b1a94-fe4d-433a-86b4-7db4b43248c1 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 14 06:15:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:55.367 270389 INFO neutron.agent.dhcp.agent [-] Starting network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:15:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:55.367 270389 INFO neutron.agent.dhcp.agent [-] Finished network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:15:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:55.368 270389 INFO neutron.agent.dhcp.agent [None req-f19b1a94-fe4d-433a-86b4-7db4b43248c1 - - - - - -] Synchronizing state complete#033[00m Oct 14 06:15:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:55.370 270389 INFO neutron.agent.dhcp.agent [None req-c4a69794-0b2a-41a8-a759-13b40ebf9337 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:51Z, description=, device_id=, device_owner=, 
dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=bda62574-f854-488d-85a1-cd1f7d3785f6, ip_allocation=immediate, mac_address=fa:16:3e:67:94:5d, name=tempest-NetworksTestDHCPv6-1918099425, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=23, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['23a1c8e5-caa4-4fc7-95bf-c1b74dc1a992', 'cc82f46e-9942-4550-b6ef-fa6fad199abe'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:50Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=1873, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:51Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:15:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:55.397 270389 INFO neutron.agent.dhcp.agent [None req-18152228-a9ef-472a-8c8b-a605bfeb7f8d - - - - - -] DHCP configuration for ports {'143b3897-f3fa-456b-9edc-636bc769c8ed', '4e355359-04e1-474e-9fca-5892b54dbee2', 'f62e0da8-fb0c-4930-a904-cbdda6127bc9'} is completed#033[00m Oct 14 06:15:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:55.542 2 INFO neutron.agent.securitygroups_rpc [None req-eeb0ba3d-bd8e-48e3-811c-a42098bbbefd 
30647d4700b846dba79efd27fad03f3d a840994a70374548889747682f4c0fa3 - - default default] Security group member updated ['59283390-a499-4358-9f49-155fd8075ea9']#033[00m Oct 14 06:15:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:55.612 270389 INFO neutron.agent.dhcp.agent [None req-c3143e04-7f7b-4166-b517-a9b696450efa - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:15:55 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:55.708 2 INFO neutron.agent.securitygroups_rpc [None req-2e7436e6-7cca-4505-95cf-dd9c675a1826 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:55 localhost dnsmasq[333603]: exiting on receipt of SIGTERM Oct 14 06:15:55 localhost podman[333670]: 2025-10-14 10:15:55.765907549 +0000 UTC m=+0.069326165 container kill 6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 14 06:15:55 localhost systemd[1]: libpod-6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9.scope: Deactivated successfully. 
Oct 14 06:15:55 localhost podman[333702]: 2025-10-14 10:15:55.836986891 +0000 UTC m=+0.057635475 container died 6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:15:55 localhost podman[333693]: Oct 14 06:15:55 localhost podman[333693]: 2025-10-14 10:15:55.876549843 +0000 UTC m=+0.113614074 container create 66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 06:15:55 localhost systemd[1]: Started libpod-conmon-66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f.scope. Oct 14 06:15:55 localhost podman[333693]: 2025-10-14 10:15:55.819753171 +0000 UTC m=+0.056817482 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:55 localhost systemd[1]: Started libcrun container. 
Oct 14 06:15:55 localhost podman[333702]: 2025-10-14 10:15:55.922960397 +0000 UTC m=+0.143608921 container cleanup 6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 06:15:55 localhost systemd[1]: libpod-conmon-6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9.scope: Deactivated successfully. Oct 14 06:15:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be4655dd40f4b7e748676d4a4826c3715501527ba9eea77caa7c36edfd2f83a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:55 localhost podman[333693]: 2025-10-14 10:15:55.937512714 +0000 UTC m=+0.174576945 container init 66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:55 localhost podman[333693]: 2025-10-14 10:15:55.9471217 +0000 UTC m=+0.184185931 container start 66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:15:55 localhost dnsmasq[333744]: started, version 2.85 cachesize 150 Oct 14 06:15:55 localhost dnsmasq[333744]: DNS service limited to local subnets Oct 14 06:15:55 localhost dnsmasq[333744]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:15:55 localhost dnsmasq[333744]: warning: no upstream servers configured Oct 14 06:15:55 localhost dnsmasq[333744]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 2 addresses Oct 14 06:15:55 localhost podman[333709]: 2025-10-14 10:15:55.999348289 +0000 UTC m=+0.202639561 container remove 6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ad377052-7a70-4723-8afc-3b9c2f0a726f, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:15:56 localhost nova_compute[295778]: 2025-10-14 10:15:56.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:56 localhost kernel: device tap4e355359-04 left promiscuous mode Oct 14 06:15:56 localhost ovn_controller[156286]: 
2025-10-14T10:15:56Z|00266|binding|INFO|Releasing lport 4e355359-04e1-474e-9fca-5892b54dbee2 from this chassis (sb_readonly=0) Oct 14 06:15:56 localhost ovn_controller[156286]: 2025-10-14T10:15:56Z|00267|binding|INFO|Setting lport 4e355359-04e1-474e-9fca-5892b54dbee2 down in Southbound Oct 14 06:15:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:56.025 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8:0:2::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-ad377052-7a70-4723-8afc-3b9c2f0a726f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad377052-7a70-4723-8afc-3b9c2f0a726f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a840994a70374548889747682f4c0fa3', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb73290e-12c9-47a8-9645-19f3cd18f1a6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=4e355359-04e1-474e-9fca-5892b54dbee2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:56.028 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 4e355359-04e1-474e-9fca-5892b54dbee2 in datapath 
ad377052-7a70-4723-8afc-3b9c2f0a726f unbound from our chassis#033[00m Oct 14 06:15:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:56.030 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ad377052-7a70-4723-8afc-3b9c2f0a726f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:15:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:56.031 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[47b2a1db-4775-4573-bc4f-964a54e7b6ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:56 localhost nova_compute[295778]: 2025-10-14 10:15:56.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:56 localhost systemd[1]: var-lib-containers-storage-overlay-ff336e878160889e673bbbac185285920568843c05d56d75ba5315e0d2d689d0-merged.mount: Deactivated successfully. Oct 14 06:15:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b41fbe7303e102cbe62b195c0056704c3022de00be8db4ef6ecd387698effa9-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:15:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:56.190 2 INFO neutron.agent.securitygroups_rpc [None req-ad06b835-86f2-46d5-a890-f849c23f39ec da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:56.258 270389 INFO neutron.agent.dhcp.agent [None req-9c054496-6d1c-4e29-a1e6-7a3f4100b786 - - - - - -] DHCP configuration for ports {'bda62574-f854-488d-85a1-cd1f7d3785f6'} is completed#033[00m Oct 14 06:15:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:56.387 270389 INFO neutron.agent.dhcp.agent [None req-494b457e-1a41-4779-9215-eb36617ac078 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:56.388 270389 INFO neutron.agent.dhcp.agent [None req-494b457e-1a41-4779-9215-eb36617ac078 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:56 localhost systemd[1]: run-netns-qdhcp\x2dad377052\x2d7a70\x2d4723\x2d8afc\x2d3b9c2f0a726f.mount: Deactivated successfully. 
Oct 14 06:15:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:56.388 270389 INFO neutron.agent.dhcp.agent [None req-494b457e-1a41-4779-9215-eb36617ac078 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:56.508 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:15:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:56.535 2 INFO neutron.agent.securitygroups_rpc [None req-9a0507c0-d895-4954-b467-687beeb58a32 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:56 localhost systemd[1]: tmp-crun.dq6xYt.mount: Deactivated successfully. Oct 14 06:15:56 localhost dnsmasq[333744]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:56 localhost podman[333771]: 2025-10-14 10:15:56.566049496 +0000 UTC m=+0.075035157 container kill 66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:15:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:56.753 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b4:89 2001:db8::f816:3eff:fe63:b489'], port_security=[], 
type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bb90059a-750e-43da-ba16-03b3dce8c155) old=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2 2001:db8::f816:3eff:fe63:b489'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:56.754 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bb90059a-750e-43da-ba16-03b3dce8c155 in datapath 74049e43-4aa7-4318-9233-a58980c3495b updated#033[00m Oct 14 06:15:56 
localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:56.757 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port ad9f95c0-875c-462b-9ab2-af240284b71b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:15:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:56.758 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:15:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:56.759 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[c095a1e9-19c7-401b-a4b4-ed618e013800]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:56 localhost nova_compute[295778]: 2025-10-14 10:15:56.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:56.960 2 INFO neutron.agent.securitygroups_rpc [None req-03cd8842-580c-49e4-9ac0-37b2fed98c6e da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:56.984 270389 INFO neutron.agent.dhcp.agent [None req-eb2d1e41-c384-4ad0-bd3d-16fdcde03338 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed#033[00m Oct 14 06:15:57 localhost podman[333809]: 2025-10-14 10:15:57.1612281 +0000 UTC m=+0.050780422 container kill 66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:15:57 localhost dnsmasq[333744]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:15:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:57 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:57.256 2 INFO neutron.agent.securitygroups_rpc [None req-f8d4a5b8-3724-4c86-9983-a6aa33844a19 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['d81eaca5-41d5-465a-ae37-475fd17fd0b7']#033[00m Oct 14 06:15:57 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:57.600 270389 INFO neutron.agent.dhcp.agent [None req-c01a4e35-441a-4c18-9ea9-c42bdd7d6962 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed#033[00m Oct 14 06:15:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:57.639 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:15:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:57.640 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:15:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:57.640 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:15:57 localhost systemd[1]: tmp-crun.od6BI1.mount: Deactivated successfully. Oct 14 06:15:57 localhost dnsmasq[333744]: exiting on receipt of SIGTERM Oct 14 06:15:57 localhost podman[333847]: 2025-10-14 10:15:57.698987507 +0000 UTC m=+0.055840187 container kill 66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:15:57 localhost systemd[1]: libpod-66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f.scope: Deactivated successfully. 
Oct 14 06:15:57 localhost podman[333860]: 2025-10-14 10:15:57.760942795 +0000 UTC m=+0.049583700 container died 66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:15:57 localhost podman[333860]: 2025-10-14 10:15:57.79948851 +0000 UTC m=+0.088129375 container cleanup 66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:15:57 localhost systemd[1]: libpod-conmon-66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f.scope: Deactivated successfully. 
Oct 14 06:15:57 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:57.928 2 INFO neutron.agent.securitygroups_rpc [None req-19a13449-e088-4f71-a3bc-b16d734311de da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['bd99b9ee-6283-4002-9bd9-0f280baab2b9']#033[00m Oct 14 06:15:57 localhost podman[333861]: 2025-10-14 10:15:57.939175557 +0000 UTC m=+0.220007095 container remove 66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 06:15:57 localhost nova_compute[295778]: 2025-10-14 10:15:57.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:15:58 localhost systemd[1]: var-lib-containers-storage-overlay-0be4655dd40f4b7e748676d4a4826c3715501527ba9eea77caa7c36edfd2f83a-merged.mount: Deactivated successfully. Oct 14 06:15:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66b55aaf50959e9cd84c761b421426e562d87f30b447bdd146aa3cb1c316d42f-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:15:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:58.576 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2 2001:db8::f816:3eff:fe63:b489'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bb90059a-750e-43da-ba16-03b3dce8c155) old=Port_Binding(mac=['fa:16:3e:63:b4:89 2001:db8::f816:3eff:fe63:b489'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:15:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:58.578 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bb90059a-750e-43da-ba16-03b3dce8c155 in datapath 74049e43-4aa7-4318-9233-a58980c3495b updated#033[00m Oct 14 06:15:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:58.582 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port ad9f95c0-875c-462b-9ab2-af240284b71b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:15:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:58.582 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:15:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:15:58.583 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[361ae78b-7edd-4171-96fc-bc5e54dd0f10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:15:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:15:59 localhost podman[333940]: Oct 14 06:15:59 localhost podman[333940]: 2025-10-14 10:15:59.453490784 +0000 UTC m=+0.091414074 container create fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:15:59 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:59.459 2 INFO neutron.agent.securitygroups_rpc [None req-748b12f7-75b2-42a7-bed1-ec166bf1086f bcbb7ceb87a845dd957d390724b3aa7b 260dac1713714ac8bb2b6f2a6df5daab - - default default] Security group member updated ['04031ec2-60f0-4ddf-a977-de00155ea50e']#033[00m Oct 14 06:15:59 localhost systemd[1]: Started libpod-conmon-fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795.scope. Oct 14 06:15:59 localhost podman[333940]: 2025-10-14 10:15:59.410401657 +0000 UTC m=+0.048324957 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:15:59 localhost systemd[1]: Started libcrun container. Oct 14 06:15:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f809afa19eaa6e1348eebadfff2693f6054350f039d6f7fa8309300a9257f7c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:15:59 localhost podman[333940]: 2025-10-14 10:15:59.526492556 +0000 UTC m=+0.164415846 container init fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:15:59 localhost podman[333940]: 2025-10-14 10:15:59.535134676 +0000 UTC m=+0.173057986 container start fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 06:15:59 localhost dnsmasq[333958]: started, version 2.85 cachesize 150
Oct 14 06:15:59 localhost dnsmasq[333958]: DNS service limited to local subnets
Oct 14 06:15:59 localhost dnsmasq[333958]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:15:59 localhost dnsmasq[333958]: warning: no upstream servers configured
Oct 14 06:15:59 localhost dnsmasq-dhcp[333958]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 14 06:15:59 localhost dnsmasq-dhcp[333958]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 14 06:15:59 localhost dnsmasq[333958]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:15:59 localhost dnsmasq-dhcp[333958]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:15:59 localhost dnsmasq-dhcp[333958]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:15:59 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:59.730 2 INFO neutron.agent.securitygroups_rpc [None req-a482116e-ce6a-49ae-9cd1-b4bde06df35b 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']
Oct 14 06:15:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:15:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:59.783 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:15:59Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=bf2750f4-176b-4439-8499-f6b76c012d8c, ip_allocation=immediate, mac_address=fa:16:3e:13:9f:3f, name=tempest-NetworksTestDHCPv6-161400413, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=27, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['228727d4-527f-4ce8-a24d-c71e122e59d0', 'f7da1e9e-e554-48c6-b7fa-d919e5495b07'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:57Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=1946, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:15:59Z on network 74049e43-4aa7-4318-9233-a58980c3495b
Oct 14 06:15:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:15:59.933 270389 INFO neutron.agent.dhcp.agent [None req-7aac6113-8355-4808-a298-5308a1fc5511 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed
Oct 14 06:15:59 localhost neutron_sriov_agent[263389]: 2025-10-14 10:15:59.937 2 INFO neutron.agent.securitygroups_rpc [None req-9f851c30-a877-40be-a2bf-a08637c72ba4 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['647ac1cf-251c-49bd-bd44-f4aca2680cd7']
Oct 14 06:16:00 localhost dnsmasq[333958]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 2 addresses
Oct 14 06:16:00 localhost podman[333977]: 2025-10-14 10:16:00.106253339 +0000 UTC m=+0.061835717 container kill fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0)
Oct 14 06:16:00 localhost dnsmasq-dhcp[333958]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:16:00 localhost dnsmasq-dhcp[333958]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:16:00 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:00.157 2 INFO neutron.agent.securitygroups_rpc [None req-7f0d63ad-e0c7-474e-b522-3c3bf7ffc024 1bd6c282bd5f479c9ccbe1c6315d2b30 144ffc90564548b79f70d01b768b605c - - default default] Security group member updated ['e1fabd25-5362-4883-952c-8d61e716234f']
Oct 14 06:16:00 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:00.162 2 INFO neutron.agent.securitygroups_rpc [None req-796fb526-3ce5-4f0a-baa4-d73f37bca3e3 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['647ac1cf-251c-49bd-bd44-f4aca2680cd7']
Oct 14 06:16:00 localhost nova_compute[295778]: 2025-10-14 10:16:00.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:16:00 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:00.407 270389 INFO neutron.agent.dhcp.agent [None req-7eadbb9d-c576-45cc-b821-00c523ef04e9 - - - - - -] DHCP configuration for ports {'bf2750f4-176b-4439-8499-f6b76c012d8c'} is completed
Oct 14 06:16:00 localhost podman[246584]: time="2025-10-14T10:16:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 06:16:00 localhost podman[246584]: @ - - [14/Oct/2025:10:16:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146409 "" "Go-http-client/1.1"
Oct 14 06:16:00 localhost podman[246584]: @ - - [14/Oct/2025:10:16:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19345 "" "Go-http-client/1.1"
Oct 14 06:16:00 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:00.770 2 INFO neutron.agent.securitygroups_rpc [None req-db8e9931-3790-4c5a-8b7e-409a585f1b0b 1bd6c282bd5f479c9ccbe1c6315d2b30 144ffc90564548b79f70d01b768b605c - - default default] Security group member updated ['e1fabd25-5362-4883-952c-8d61e716234f']
Oct 14 06:16:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:16:01 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:01.984 270389 INFO neutron.agent.dhcp.agent [None req-f19b1a94-fe4d-433a-86b4-7db4b43248c1 - - - - - -] Synchronizing state
Oct 14 06:16:02 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:02.017 2 INFO neutron.agent.securitygroups_rpc [None req-aad7157c-9e35-462e-8976-eab83204c7ad 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']
Oct 14 06:16:02 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:02.107 2 INFO neutron.agent.securitygroups_rpc [None req-5266bc6a-e668-45a5-ad9f-f03190787ee1 bcbb7ceb87a845dd957d390724b3aa7b 260dac1713714ac8bb2b6f2a6df5daab - - default default] Security group member updated ['04031ec2-60f0-4ddf-a977-de00155ea50e']
Oct 14 06:16:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:02.160 270389 INFO neutron.agent.dhcp.agent [None req-839e85df-74c5-4c76-95e1-e7108c3067f2 - - - - - -] All active networks have been fetched through RPC.
Oct 14 06:16:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:02.161 270389 INFO neutron.agent.dhcp.agent [-] Starting network 43895bac-b789-42bc-b201-b422c5192247 dhcp configuration
Oct 14 06:16:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:02.162 270389 INFO neutron.agent.dhcp.agent [-] Finished network 43895bac-b789-42bc-b201-b422c5192247 dhcp configuration
Oct 14 06:16:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:02.162 270389 INFO neutron.agent.dhcp.agent [None req-839e85df-74c5-4c76-95e1-e7108c3067f2 - - - - - -] Synchronizing state complete
Oct 14 06:16:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:02.163 270389 INFO neutron.agent.dhcp.agent [None req-ac073982-bb3f-4fa4-be2b-f89b942e0db0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 14 06:16:02 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:02.264 2 INFO neutron.agent.securitygroups_rpc [None req-eba3964c-6077-4898-9ae9-96c06b6bd458 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['411e7128-6eb3-4bfe-814c-1d1cb5173c3b']
Oct 14 06:16:02 localhost dnsmasq[333958]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:16:02 localhost dnsmasq-dhcp[333958]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:16:02 localhost dnsmasq-dhcp[333958]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:16:02 localhost podman[334018]: 2025-10-14 10:16:02.376885076 +0000 UTC m=+0.065353369 container kill fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:16:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:16:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:16:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:16:02 localhost podman[334034]: 2025-10-14 10:16:02.502293913 +0000 UTC m=+0.094288930 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Oct 14 06:16:02 localhost systemd[1]: tmp-crun.HvqsVZ.mount: Deactivated successfully.
Oct 14 06:16:02 localhost podman[334033]: 2025-10-14 10:16:02.554896513 +0000 UTC m=+0.150374752 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, name=ubi9-minimal, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 14 06:16:02 localhost podman[334034]: 2025-10-14 10:16:02.571742161 +0000 UTC m=+0.163737068 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:16:02 localhost podman[334035]: 2025-10-14 10:16:02.569016758 +0000 UTC m=+0.156406132 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 14 06:16:02 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:16:02 localhost podman[334033]: 2025-10-14 10:16:02.590174601 +0000 UTC m=+0.185652840 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 14 06:16:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:02.599 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 14 06:16:02 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:16:02 localhost podman[334035]: 2025-10-14 10:16:02.657135832 +0000 UTC m=+0.244525216 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 06:16:02 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:16:02 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:02.797 2 INFO neutron.agent.securitygroups_rpc [None req-c6d26a90-39b6-4ac3-8245-cf5b676729dc da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['411e7128-6eb3-4bfe-814c-1d1cb5173c3b']
Oct 14 06:16:02 localhost nova_compute[295778]: 2025-10-14 10:16:02.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:16:03 localhost dnsmasq[333958]: exiting on receipt of SIGTERM
Oct 14 06:16:03 localhost podman[334122]: 2025-10-14 10:16:03.099773748 +0000 UTC m=+0.061667832 container kill fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:16:03 localhost systemd[1]: libpod-fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795.scope: Deactivated successfully.
Oct 14 06:16:03 localhost podman[334137]: 2025-10-14 10:16:03.169815371 +0000 UTC m=+0.047364501 container died fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:16:03 localhost podman[334137]: 2025-10-14 10:16:03.210501634 +0000 UTC m=+0.088050714 container remove fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:16:03 localhost systemd[1]: libpod-conmon-fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795.scope: Deactivated successfully.
Oct 14 06:16:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail
Oct 14 06:16:03 localhost openstack_network_exporter[248748]: ERROR 10:16:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 06:16:03 localhost openstack_network_exporter[248748]: ERROR 10:16:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:16:03 localhost openstack_network_exporter[248748]: ERROR 10:16:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:16:03 localhost openstack_network_exporter[248748]: ERROR 10:16:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 06:16:03 localhost openstack_network_exporter[248748]:
Oct 14 06:16:03 localhost openstack_network_exporter[248748]: ERROR 10:16:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 06:16:03 localhost openstack_network_exporter[248748]:
Oct 14 06:16:03 localhost systemd[1]: var-lib-containers-storage-overlay-0f809afa19eaa6e1348eebadfff2693f6054350f039d6f7fa8309300a9257f7c-merged.mount: Deactivated successfully.
Oct 14 06:16:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc09dee5650fd1cc24ba8bbc951b0d0c60597a3cabdbda92f2a4fb9bb4a93795-userdata-shm.mount: Deactivated successfully.
Oct 14 06:16:04 localhost podman[334209]:
Oct 14 06:16:04 localhost podman[334209]: 2025-10-14 10:16:04.19404614 +0000 UTC m=+0.074014701 container create 73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 06:16:04 localhost systemd[1]: Started libpod-conmon-73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af.scope.
Oct 14 06:16:04 localhost systemd[1]: tmp-crun.Dxzwci.mount: Deactivated successfully.
Oct 14 06:16:04 localhost podman[334209]: 2025-10-14 10:16:04.160255431 +0000 UTC m=+0.040223972 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:16:04 localhost systemd[1]: Started libcrun container.
Oct 14 06:16:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/020b8e344c9745d6b34dedaa6b556d23dba8c652389c3b663e2da7fa6de5e08b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:16:04 localhost podman[334209]: 2025-10-14 10:16:04.282099702 +0000 UTC m=+0.162068263 container init 73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 06:16:04 localhost podman[334209]: 2025-10-14 10:16:04.29144359 +0000 UTC m=+0.171412161 container start 73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 06:16:04 localhost dnsmasq[334228]: started, version 2.85 cachesize 150
Oct 14 06:16:04 localhost dnsmasq[334228]: DNS service limited to local subnets
Oct 14 06:16:04 localhost dnsmasq[334228]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:16:04 localhost dnsmasq[334228]: warning: no upstream servers configured
Oct 14 06:16:04 localhost dnsmasq-dhcp[334228]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 14 06:16:04 localhost dnsmasq[334228]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:16:04 localhost dnsmasq-dhcp[334228]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:16:04 localhost dnsmasq-dhcp[334228]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:16:04 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:04.548 2 INFO neutron.agent.securitygroups_rpc [None req-a9ac4c2b-3396-40cc-a43d-86ae8a282899 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['357eb12a-bd5c-457e-b498-fb7d07e886ba']
Oct 14 06:16:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:04.554 270389 INFO neutron.agent.dhcp.agent [None req-d61f4651-1c05-4c0e-8eb6-c11334e0510d - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed
Oct 14 06:16:04 localhost dnsmasq[334228]: exiting on receipt of SIGTERM
Oct 14 06:16:04 localhost podman[334245]: 2025-10-14 10:16:04.68720128 +0000 UTC m=+0.059306099 container kill 73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:16:04 localhost systemd[1]: libpod-73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af.scope: Deactivated successfully.
Oct 14 06:16:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:16:04 localhost podman[334258]: 2025-10-14 10:16:04.759278537 +0000 UTC m=+0.057623114 container died 73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 06:16:04 localhost podman[334258]: 2025-10-14 10:16:04.789258934 +0000 UTC m=+0.087603471 container cleanup 73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0)
Oct 14 06:16:04 localhost systemd[1]: libpod-conmon-73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af.scope: Deactivated successfully.
Oct 14 06:16:04 localhost podman[334260]: 2025-10-14 10:16:04.842493521 +0000 UTC m=+0.133323048 container remove 73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 06:16:05 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:05.030 2 INFO neutron.agent.securitygroups_rpc [None req-3c1ae6ba-b890-432a-83bd-25fd553cee0e da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['357eb12a-bd5c-457e-b498-fb7d07e886ba']
Oct 14 06:16:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Oct 14 06:16:05 localhost nova_compute[295778]: 2025-10-14 10:16:05.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:16:05 localhost systemd[1]: var-lib-containers-storage-overlay-020b8e344c9745d6b34dedaa6b556d23dba8c652389c3b663e2da7fa6de5e08b-merged.mount: Deactivated successfully.
Oct 14 06:16:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73ad53c41d6ccf5bc9561a4c4faf22655ad96f5c70dfb39e4eecebcebb65c1af-userdata-shm.mount: Deactivated successfully.
Oct 14 06:16:05 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:05.597 2 INFO neutron.agent.securitygroups_rpc [None req-f190560d-7f11-4e1c-a5d5-63828c94af63 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['357eb12a-bd5c-457e-b498-fb7d07e886ba']#033[00m
Oct 14 06:16:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:05.706 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bb90059a-750e-43da-ba16-03b3dce8c155) old=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2 2001:db8::f816:3eff:fe63:b489'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:16:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:05.708 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bb90059a-750e-43da-ba16-03b3dce8c155 in datapath 74049e43-4aa7-4318-9233-a58980c3495b updated#033[00m
Oct 14 06:16:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:05.711 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port ad9f95c0-875c-462b-9ab2-af240284b71b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Oct 14 06:16:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:05.712 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:16:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:05.713 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[76fdf0b2-2f2b-4772-bf34-0915bbd2d847]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:16:06 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:06.161 2 INFO neutron.agent.securitygroups_rpc [None req-64203ab7-c72d-4d68-9ffa-dd58b94ea77b da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['357eb12a-bd5c-457e-b498-fb7d07e886ba']#033[00m
Oct 14 06:16:06 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:06.634 2 INFO neutron.agent.securitygroups_rpc [None req-e7d6730b-f256-42fe-acb2-b345ff83af61 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['357eb12a-bd5c-457e-b498-fb7d07e886ba']#033[00m
Oct 14 06:16:06 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:06.858 270389 INFO neutron.agent.linux.ip_lib [None req-9ae3427f-c02a-4520-a583-4d903309c649 - - - - - -] Device tap751bfe6e-ed cannot be used as it has no MAC address#033[00m
Oct 14 06:16:06 localhost nova_compute[295778]: 2025-10-14 10:16:06.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:06 localhost kernel: device tap751bfe6e-ed entered promiscuous mode
Oct 14 06:16:06 localhost NetworkManager[5972]: [1760436966.9041] manager: (tap751bfe6e-ed): new Generic device (/org/freedesktop/NetworkManager/Devices/50)
Oct 14 06:16:06 localhost nova_compute[295778]: 2025-10-14 10:16:06.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:06 localhost ovn_controller[156286]: 2025-10-14T10:16:06Z|00268|binding|INFO|Claiming lport 751bfe6e-ed21-4412-8a34-1ddae80aa076 for this chassis.
Oct 14 06:16:06 localhost ovn_controller[156286]: 2025-10-14T10:16:06Z|00269|binding|INFO|751bfe6e-ed21-4412-8a34-1ddae80aa076: Claiming unknown
Oct 14 06:16:06 localhost systemd-udevd[334322]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:16:06 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:06.915 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-2579b986-1ecd-41e1-9c29-23fe56d2546f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2579b986-1ecd-41e1-9c29-23fe56d2546f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '350a918b3c8b45c8b7f0665a734b2d1c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af5bf630-6933-47f0-af34-6cb52eb844c9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=751bfe6e-ed21-4412-8a34-1ddae80aa076) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:16:06 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:06.919 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 751bfe6e-ed21-4412-8a34-1ddae80aa076 in datapath 2579b986-1ecd-41e1-9c29-23fe56d2546f bound to our chassis#033[00m
Oct 14 06:16:06 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:06.922 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2579b986-1ecd-41e1-9c29-23fe56d2546f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 06:16:06 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:06.925 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[a339ba60-449f-43ca-b166-6b57c04934ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:16:06 localhost journal[236030]: ethtool ioctl error on tap751bfe6e-ed: No such device
Oct 14 06:16:06 localhost ovn_controller[156286]: 2025-10-14T10:16:06Z|00270|binding|INFO|Setting lport 751bfe6e-ed21-4412-8a34-1ddae80aa076 ovn-installed in OVS
Oct 14 06:16:06 localhost ovn_controller[156286]: 2025-10-14T10:16:06Z|00271|binding|INFO|Setting lport 751bfe6e-ed21-4412-8a34-1ddae80aa076 up in Southbound
Oct 14 06:16:06 localhost journal[236030]: ethtool ioctl error on tap751bfe6e-ed: No such device
Oct 14 06:16:06 localhost nova_compute[295778]: 2025-10-14 10:16:06.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:06 localhost journal[236030]: ethtool ioctl error on tap751bfe6e-ed: No such device
Oct 14 06:16:06 localhost journal[236030]: ethtool ioctl error on tap751bfe6e-ed: No such device
Oct 14 06:16:06 localhost journal[236030]: ethtool ioctl error on tap751bfe6e-ed: No such device
Oct 14 06:16:06 localhost journal[236030]: ethtool ioctl error on tap751bfe6e-ed: No such device
Oct 14 06:16:06 localhost journal[236030]: ethtool ioctl error on tap751bfe6e-ed: No such device
Oct 14 06:16:06 localhost journal[236030]: ethtool ioctl error on tap751bfe6e-ed: No such device
Oct 14 06:16:06 localhost nova_compute[295778]: 2025-10-14 10:16:06.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:07 localhost nova_compute[295778]: 2025-10-14 10:16:07.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Oct 14 06:16:07 localhost podman[334396]:
Oct 14 06:16:07 localhost podman[334396]: 2025-10-14 10:16:07.668381881 +0000 UTC m=+0.080143224 container create 529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:16:07 localhost systemd[1]: Started libpod-conmon-529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29.scope.
Oct 14 06:16:07 localhost podman[334396]: 2025-10-14 10:16:07.632251209 +0000 UTC m=+0.044012542 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:16:07 localhost systemd[1]: Started libcrun container.
Oct 14 06:16:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35390dd1d8fa64ddb47d0f3292b4b654406ed86a79875c89bf2a1761462e6ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:16:07 localhost podman[334396]: 2025-10-14 10:16:07.753363831 +0000 UTC m=+0.165125174 container init 529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 06:16:07 localhost podman[334396]: 2025-10-14 10:16:07.762773242 +0000 UTC m=+0.174534585 container start 529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 06:16:07 localhost dnsmasq[334431]: started, version 2.85 cachesize 150
Oct 14 06:16:07 localhost dnsmasq[334431]: DNS service limited to local subnets
Oct 14 06:16:07 localhost dnsmasq[334431]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:16:07 localhost dnsmasq[334431]: warning: no upstream servers configured
Oct 14 06:16:07 localhost dnsmasq-dhcp[334431]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 14 06:16:07 localhost dnsmasq[334431]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:16:07 localhost dnsmasq-dhcp[334431]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:16:07 localhost dnsmasq-dhcp[334431]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:16:07 localhost dnsmasq[334431]: exiting on receipt of SIGTERM
Oct 14 06:16:07 localhost systemd[1]: libpod-529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29.scope: Deactivated successfully.
Oct 14 06:16:07 localhost podman[334438]: 2025-10-14 10:16:07.83714807 +0000 UTC m=+0.050228327 container died 529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:16:07 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:07.868 2 INFO neutron.agent.securitygroups_rpc [None req-16103675-3f14-40f2-b81a-2b89b2207f51 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['357eb12a-bd5c-457e-b498-fb7d07e886ba']#033[00m
Oct 14 06:16:07 localhost podman[334438]: 2025-10-14 10:16:07.870653342 +0000 UTC m=+0.083733539 container cleanup 529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:16:07 localhost podman[334451]: 2025-10-14 10:16:07.892663277 +0000 UTC m=+0.049049895 container cleanup 529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2)
Oct 14 06:16:07 localhost systemd[1]: libpod-conmon-529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29.scope: Deactivated successfully.
Oct 14 06:16:07 localhost nova_compute[295778]: 2025-10-14 10:16:07.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:07 localhost podman[334464]: 2025-10-14 10:16:07.98410848 +0000 UTC m=+0.099298143 container remove 529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 06:16:08 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:08.067 270389 INFO neutron.agent.dhcp.agent [None req-7fd5ac0a-94c9-4d7a-b188-b6d01086ebf3 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed#033[00m
Oct 14 06:16:08 localhost podman[334487]:
Oct 14 06:16:08 localhost podman[334487]: 2025-10-14 10:16:08.086231287 +0000 UTC m=+0.084465258 container create b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 06:16:08 localhost systemd[1]: Started libpod-conmon-b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e.scope.
Oct 14 06:16:08 localhost systemd[1]: Started libcrun container.
Oct 14 06:16:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a760d61ce8c98fe911d15844236c388e72f112750a46ad110e822e4e852fcbe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:16:08 localhost podman[334487]: 2025-10-14 10:16:08.047778603 +0000 UTC m=+0.046012565 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:16:08 localhost podman[334487]: 2025-10-14 10:16:08.14911685 +0000 UTC m=+0.147350791 container init b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 06:16:08 localhost podman[334487]: 2025-10-14 10:16:08.158632233 +0000 UTC m=+0.156866174 container start b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 06:16:08 localhost dnsmasq[334516]: started, version 2.85 cachesize 150
Oct 14 06:16:08 localhost dnsmasq[334516]: DNS service limited to local subnets
Oct 14 06:16:08 localhost dnsmasq[334516]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:16:08 localhost dnsmasq[334516]: warning: no upstream servers configured
Oct 14 06:16:08 localhost dnsmasq-dhcp[334516]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 14 06:16:08 localhost dnsmasq[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/addn_hosts - 0 addresses
Oct 14 06:16:08 localhost dnsmasq-dhcp[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/host
Oct 14 06:16:08 localhost dnsmasq-dhcp[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/opts
Oct 14 06:16:08 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:08.368 270389 INFO neutron.agent.dhcp.agent [None req-78998d38-810b-402b-9f72-95c2c2221c90 - - - - - -] DHCP configuration for ports {'a09cccea-8aac-4d56-af50-3acfd516bc03'} is completed#033[00m
Oct 14 06:16:08 localhost systemd[1]: var-lib-containers-storage-overlay-a35390dd1d8fa64ddb47d0f3292b4b654406ed86a79875c89bf2a1761462e6ad-merged.mount: Deactivated successfully.
Oct 14 06:16:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-529bc54fde444fc7cdb57ea10a008797a08fdd24626db72f428db03d73fbcd29-userdata-shm.mount: Deactivated successfully.
Oct 14 06:16:08 localhost podman[334555]:
Oct 14 06:16:08 localhost podman[334555]: 2025-10-14 10:16:08.826977254 +0000 UTC m=+0.095532674 container create 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 06:16:08 localhost systemd[1]: Started libpod-conmon-0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3.scope.
Oct 14 06:16:08 localhost podman[334555]: 2025-10-14 10:16:08.785820739 +0000 UTC m=+0.054376179 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:16:08 localhost systemd[1]: Started libcrun container.
Oct 14 06:16:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e469fbb2665cf4deab856245e77b8429ac58888630d8a0a7180e9ab1009e4bd5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:16:08 localhost podman[334555]: 2025-10-14 10:16:08.902323157 +0000 UTC m=+0.170878567 container init 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 06:16:08 localhost podman[334555]: 2025-10-14 10:16:08.911053691 +0000 UTC m=+0.179609131 container start 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3)
Oct 14 06:16:08 localhost dnsmasq[334573]: started, version 2.85 cachesize 150
Oct 14 06:16:08 localhost dnsmasq[334573]: DNS service limited to local subnets
Oct 14 06:16:08 localhost dnsmasq[334573]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:16:08 localhost dnsmasq[334573]: warning: no upstream servers configured
Oct 14 06:16:08 localhost dnsmasq-dhcp[334573]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 14 06:16:08 localhost dnsmasq[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:16:08 localhost dnsmasq-dhcp[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:16:08 localhost dnsmasq-dhcp[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:16:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:16:09
Oct 14 06:16:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:16:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap
Oct 14 06:16:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['vms', 'volumes', '.mgr', 'images', 'manila_data', 'manila_metadata', 'backups']
Oct 14 06:16:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:16:09 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:09.119 270389 INFO neutron.agent.dhcp.agent [None req-0dd89b8c-14a4-4d3b-b1da-89ed877e3472 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed#033[00m
Oct 14 06:16:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:16:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:16:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:16:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:16:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:16:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:16:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32)
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32)
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:16:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:16:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:16:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:09.572 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2 2001:db8::f816:3eff:fe63:b489'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bb90059a-750e-43da-ba16-03b3dce8c155) old=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:16:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:09.575 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bb90059a-750e-43da-ba16-03b3dce8c155 in datapath 74049e43-4aa7-4318-9233-a58980c3495b updated#033[00m
Oct 14 06:16:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:09.578 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port ad9f95c0-875c-462b-9ab2-af240284b71b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Oct 14 06:16:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:09.579 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:16:09 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:09.580 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[781f8d3e-0037-488e-8ab2-c945a28c5c10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:16:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:16:09 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:09.810 2 INFO neutron.agent.securitygroups_rpc [None req-c1a66779-28ca-41ab-8a3a-9fc04bd41773 da88dc55c7044cbba38f975c7e0b048b ad642aabc86d4ac1b3d38b6fe087eb44 - - default default] Security group rule updated ['1b366e00-8855-4b43-9b4b-e7499389da43']#033[00m
Oct 14 06:16:10 localhost nova_compute[295778]: 2025-10-14 10:16:10.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:10 localhost dnsmasq[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:16:10 localhost dnsmasq-dhcp[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:16:10 localhost dnsmasq-dhcp[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:16:10 localhost systemd[1]: tmp-crun.FIzCzd.mount: Deactivated successfully.
Oct 14 06:16:10 localhost podman[334591]: 2025-10-14 10:16:10.326871757 +0000 UTC m=+0.065322289 container kill 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 06:16:10 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:10.712 270389 INFO neutron.agent.dhcp.agent [None req-295b6173-670e-4ffd-b106-cad800a8e07a - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed#033[00m
Oct 14 06:16:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Oct 14 06:16:11 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:11.572 2 INFO neutron.agent.securitygroups_rpc [None req-61a21130-b43a-4480-a1a6-dff95d56ba6f 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m
Oct 14 06:16:11 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:11.754 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:10Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[],
fixed_ips=[, ], id=8e6e0db5-7968-40a4-bc06-0bbd14d41149, ip_allocation=immediate, mac_address=fa:16:3e:e8:50:e6, name=tempest-NetworksTestDHCPv6-1420857271, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=31, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['3602788f-8700-47d4-ade3-4c329e15058e', 'd044c25e-3646-491b-82bc-5bb7440c5fe8'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:06Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2031, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:11Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:16:11 localhost dnsmasq[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 2 addresses Oct 14 06:16:11 localhost dnsmasq-dhcp[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:16:11 localhost dnsmasq-dhcp[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:16:11 localhost podman[334630]: 2025-10-14 10:16:11.991333598 +0000 UTC m=+0.061279042 container kill 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:12 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:12.259 270389 INFO neutron.agent.dhcp.agent [None req-79b0eacb-32c5-4059-a5ba-3f9b4116c077 - - - - - -] DHCP configuration for ports {'8e6e0db5-7968-40a4-bc06-0bbd14d41149'} is completed#033[00m Oct 14 06:16:13 localhost nova_compute[295778]: 2025-10-14 10:16:13.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s Oct 14 06:16:14 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:14.494 2 INFO neutron.agent.securitygroups_rpc [None req-4e3d39ad-b026-4772-bd9e-9dbb5b15581b 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:14 localhost dnsmasq[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:14 localhost podman[334667]: 2025-10-14 10:16:14.738530874 +0000 UTC m=+0.063551272 container kill 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:16:14 localhost dnsmasq-dhcp[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:16:14 localhost dnsmasq-dhcp[334573]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:16:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:16:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:14 localhost podman[334682]: 2025-10-14 10:16:14.859384399 +0000 UTC m=+0.084751516 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:14 localhost podman[334682]: 2025-10-14 10:16:14.875872367 +0000 UTC m=+0.101239504 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 14 06:16:14 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:16:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Oct 14 06:16:15 localhost nova_compute[295778]: 2025-10-14 10:16:15.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:15.668 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:15 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:15.669 161932 DEBUG 
neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:16:15 localhost nova_compute[295778]: 2025-10-14 10:16:15.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:15 localhost nova_compute[295778]: 2025-10-14 10:16:15.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:15 localhost nova_compute[295778]: 2025-10-14 10:16:15.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 14 06:16:15 localhost nova_compute[295778]: 2025-10-14 10:16:15.940 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 14 06:16:16 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:16.462 270389 INFO neutron.agent.linux.ip_lib [None req-8e2daa23-effe-42fe-a60f-22d51e4f582e - - - - - -] Device tap23e1c194-73 cannot be used as it has no MAC address#033[00m Oct 14 06:16:16 localhost dnsmasq[334573]: exiting on receipt of SIGTERM Oct 14 06:16:16 localhost podman[334727]: 2025-10-14 10:16:16.471153548 +0000 UTC m=+0.075342176 container kill 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:16:16 localhost systemd[1]: libpod-0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3.scope: Deactivated successfully. Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:16 localhost kernel: device tap23e1c194-73 entered promiscuous mode Oct 14 06:16:16 localhost NetworkManager[5972]: [1760436976.5276] manager: (tap23e1c194-73): new Generic device (/org/freedesktop/NetworkManager/Devices/51) Oct 14 06:16:16 localhost ovn_controller[156286]: 2025-10-14T10:16:16Z|00272|binding|INFO|Claiming lport 23e1c194-7307-4419-b327-510181e0520f for this chassis. Oct 14 06:16:16 localhost ovn_controller[156286]: 2025-10-14T10:16:16Z|00273|binding|INFO|23e1c194-7307-4419-b327-510181e0520f: Claiming unknown Oct 14 06:16:16 localhost systemd-udevd[334767]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:16.539 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-b3df6336-119f-4ceb-8286-b5fbbf09920b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b3df6336-119f-4ceb-8286-b5fbbf09920b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7bf1be3a6a454996a4414fad306906f1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e242c490-6d6b-4f00-b5f3-0df7926f9f2c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=23e1c194-7307-4419-b327-510181e0520f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:16.541 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 23e1c194-7307-4419-b327-510181e0520f in datapath b3df6336-119f-4ceb-8286-b5fbbf09920b bound to our chassis#033[00m Oct 14 06:16:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:16.543 161932 DEBUG 
neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b3df6336-119f-4ceb-8286-b5fbbf09920b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:16:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:16.548 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5b485c-b522-4160-890e-a7e9f59fbe36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:16 localhost podman[334745]: 2025-10-14 10:16:16.550107878 +0000 UTC m=+0.063118040 container died 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:16:16 localhost systemd[1]: tmp-crun.R47TSb.mount: Deactivated successfully. 
Oct 14 06:16:16 localhost ovn_controller[156286]: 2025-10-14T10:16:16Z|00274|binding|INFO|Setting lport 23e1c194-7307-4419-b327-510181e0520f ovn-installed in OVS Oct 14 06:16:16 localhost ovn_controller[156286]: 2025-10-14T10:16:16Z|00275|binding|INFO|Setting lport 23e1c194-7307-4419-b327-510181e0520f up in Southbound Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:16 localhost podman[334745]: 2025-10-14 10:16:16.593360209 +0000 UTC m=+0.106370351 container cleanup 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:16 localhost systemd[1]: libpod-conmon-0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3.scope: Deactivated successfully. 
Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:16 localhost podman[334747]: 2025-10-14 10:16:16.637899574 +0000 UTC m=+0.145005789 container remove 0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:16 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:16.888 270389 INFO neutron.agent.linux.ip_lib [None req-48b194a7-c6ec-46e6-9f21-9614e4c6deab - - - - - -] Device tap97cf526a-ae cannot be used as it has no MAC address#033[00m Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:16 localhost kernel: device tap97cf526a-ae entered promiscuous mode Oct 14 06:16:16 localhost NetworkManager[5972]: [1760436976.9428] manager: (tap97cf526a-ae): new Generic device (/org/freedesktop/NetworkManager/Devices/52) Oct 14 06:16:16 localhost systemd-udevd[334769]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:16 localhost ovn_controller[156286]: 2025-10-14T10:16:16Z|00276|binding|INFO|Claiming lport 97cf526a-ae98-4bc2-bd24-f3511b475392 for this chassis. Oct 14 06:16:16 localhost ovn_controller[156286]: 2025-10-14T10:16:16Z|00277|binding|INFO|97cf526a-ae98-4bc2-bd24-f3511b475392: Claiming unknown Oct 14 06:16:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:16.958 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-3a7c0fe5-96d6-4107-a816-0bfeb02f7211', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a7c0fe5-96d6-4107-a816-0bfeb02f7211', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '350a918b3c8b45c8b7f0665a734b2d1c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ede9100f-e6e2-42fd-9f07-ec4e0cf25a0d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=97cf526a-ae98-4bc2-bd24-f3511b475392) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:16 localhost ovn_metadata_agent[161927]: 
2025-10-14 10:16:16.960 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 97cf526a-ae98-4bc2-bd24-f3511b475392 in datapath 3a7c0fe5-96d6-4107-a816-0bfeb02f7211 bound to our chassis#033[00m Oct 14 06:16:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:16.963 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port b4ef1490-f028-4786-ba0d-f84b1304ca65 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:16:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:16.964 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a7c0fe5-96d6-4107-a816-0bfeb02f7211, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:16.965 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[0434165a-e40d-46e1-9af1-f632bad6b2b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:16 localhost ovn_controller[156286]: 2025-10-14T10:16:16Z|00278|binding|INFO|Setting lport 97cf526a-ae98-4bc2-bd24-f3511b475392 ovn-installed in OVS Oct 14 06:16:16 localhost ovn_controller[156286]: 2025-10-14T10:16:16Z|00279|binding|INFO|Setting lport 97cf526a-ae98-4bc2-bd24-f3511b475392 up in Southbound Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:16 localhost nova_compute[295778]: 2025-10-14 10:16:16.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost nova_compute[295778]: 2025-10-14 10:16:17.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost nova_compute[295778]: 2025-10-14 10:16:17.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:17.211 270389 INFO neutron.agent.linux.ip_lib [None req-78a82bcb-33bd-4679-8756-8e3490b7b1bc - - - - - -] Device tap6fd9907e-ef cannot be used as it has no MAC address#033[00m Oct 14 06:16:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s Oct 14 06:16:17 localhost nova_compute[295778]: 2025-10-14 10:16:17.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost kernel: device tap6fd9907e-ef entered promiscuous mode Oct 14 06:16:17 localhost NetworkManager[5972]: [1760436977.2779] manager: (tap6fd9907e-ef): new Generic device (/org/freedesktop/NetworkManager/Devices/53) Oct 14 06:16:17 localhost nova_compute[295778]: 2025-10-14 10:16:17.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost ovn_controller[156286]: 2025-10-14T10:16:17Z|00280|binding|INFO|Claiming lport 6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa for this chassis. 
Oct 14 06:16:17 localhost ovn_controller[156286]: 2025-10-14T10:16:17Z|00281|binding|INFO|6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa: Claiming unknown Oct 14 06:16:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:17.292 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-dfdbdb17-6bbf-4fee-8769-34c2b86d2981', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfdbdb17-6bbf-4fee-8769-34c2b86d2981', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '350a918b3c8b45c8b7f0665a734b2d1c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c845b6cc-56ab-48b0-bff8-a12356f33c56, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:17.295 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa in datapath dfdbdb17-6bbf-4fee-8769-34c2b86d2981 bound to our chassis#033[00m Oct 14 06:16:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:17.298 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
dfdbdb17-6bbf-4fee-8769-34c2b86d2981 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:16:17 localhost nova_compute[295778]: 2025-10-14 10:16:17.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:17.300 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[0b73962f-7dcc-4f3d-8f16-d2a6e68cc838]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:17 localhost ovn_controller[156286]: 2025-10-14T10:16:17Z|00282|binding|INFO|Setting lport 6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa ovn-installed in OVS Oct 14 06:16:17 localhost ovn_controller[156286]: 2025-10-14T10:16:17Z|00283|binding|INFO|Setting lport 6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa up in Southbound Oct 14 06:16:17 localhost nova_compute[295778]: 2025-10-14 10:16:17.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost nova_compute[295778]: 2025-10-14 10:16:17.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost systemd[1]: var-lib-containers-storage-overlay-e469fbb2665cf4deab856245e77b8429ac58888630d8a0a7180e9ab1009e4bd5-merged.mount: Deactivated successfully. Oct 14 06:16:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a0df678d754cd3c5b5bc4db8828e1493d81b52888ad96ee92041841e7c0c8c3-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:16:17 localhost nova_compute[295778]: 2025-10-14 10:16:17.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost nova_compute[295778]: 2025-10-14 10:16:17.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:17 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:17.437 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:17 localhost podman[334933]: Oct 14 06:16:17 localhost podman[334933]: 2025-10-14 10:16:17.642902341 +0000 UTC m=+0.039225764 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:17 localhost podman[334933]: 2025-10-14 10:16:17.742239493 +0000 UTC m=+0.138562896 container create edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:16:17 localhost systemd[1]: Started libpod-conmon-edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439.scope. Oct 14 06:16:17 localhost systemd[1]: Started libcrun container. 
Oct 14 06:16:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f60fb398b8066c9970d1302dd0cfa5c5309a984171e41a883495db95b98bcc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:17 localhost podman[334933]: 2025-10-14 10:16:17.817080255 +0000 UTC m=+0.213403648 container init edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:16:17 localhost podman[334965]: Oct 14 06:16:17 localhost podman[334965]: 2025-10-14 10:16:17.830523272 +0000 UTC m=+0.102964200 container create bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:16:17 localhost dnsmasq[334988]: started, version 2.85 cachesize 150 Oct 14 06:16:17 localhost dnsmasq[334988]: DNS service limited to local subnets Oct 14 06:16:17 localhost dnsmasq[334988]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:17 localhost 
dnsmasq[334988]: warning: no upstream servers configured Oct 14 06:16:17 localhost dnsmasq[334988]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:17 localhost podman[334933]: 2025-10-14 10:16:17.847385161 +0000 UTC m=+0.243708554 container start edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:16:17 localhost systemd[1]: Started libpod-conmon-bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7.scope. Oct 14 06:16:17 localhost systemd[1]: Started libcrun container. 
Oct 14 06:16:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4ca8813dca59c5ee428e4878f3d6ba76beeb75031425e56d311908691b2eed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:17 localhost podman[334965]: 2025-10-14 10:16:17.789498281 +0000 UTC m=+0.061939219 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:17 localhost podman[334965]: 2025-10-14 10:16:17.894502174 +0000 UTC m=+0.166943092 container init bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:16:17 localhost podman[334965]: 2025-10-14 10:16:17.903228876 +0000 UTC m=+0.175669804 container start bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:16:17 localhost dnsmasq[334994]: started, version 2.85 cachesize 150 Oct 14 06:16:17 localhost dnsmasq[334994]: DNS service limited to local subnets Oct 14 06:16:17 localhost dnsmasq[334994]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:17 localhost dnsmasq[334994]: warning: no upstream servers configured Oct 14 06:16:17 localhost dnsmasq-dhcp[334994]: DHCP, static leases only on 10.103.0.0, lease time 1d Oct 14 06:16:17 localhost dnsmasq[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/addn_hosts - 0 addresses Oct 14 06:16:17 localhost dnsmasq-dhcp[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/host Oct 14 06:16:17 localhost dnsmasq-dhcp[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/opts Oct 14 06:16:18 localhost nova_compute[295778]: 2025-10-14 10:16:18.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:18 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:18.145 270389 INFO neutron.agent.dhcp.agent [None req-8eadcb1c-1e76-4559-b405-680cf2142d72 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed#033[00m Oct 14 06:16:18 localhost podman[335021]: Oct 14 06:16:18 localhost podman[335021]: 2025-10-14 10:16:18.17512508 +0000 UTC m=+0.087801217 container create 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2) Oct 14 06:16:18 localhost systemd[1]: Started 
libpod-conmon-49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62.scope. Oct 14 06:16:18 localhost systemd[1]: Started libcrun container. Oct 14 06:16:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4634e47a2cebf10366fe2d0be62f46efa865cb59572cb8cc262e80dfbd5502db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:18 localhost podman[335021]: 2025-10-14 10:16:18.130359109 +0000 UTC m=+0.043035216 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:18 localhost podman[335021]: 2025-10-14 10:16:18.233834631 +0000 UTC m=+0.146510768 container init 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:16:18 localhost dnsmasq[335082]: started, version 2.85 cachesize 150 Oct 14 06:16:18 localhost dnsmasq[335082]: DNS service limited to local subnets Oct 14 06:16:18 localhost dnsmasq[335082]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:18 localhost dnsmasq[335082]: warning: no upstream servers configured Oct 14 06:16:18 localhost dnsmasq-dhcp[335082]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:16:18 localhost dnsmasq[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/addn_hosts - 0 addresses Oct 14 06:16:18 localhost dnsmasq-dhcp[335082]: read 
/var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/host Oct 14 06:16:18 localhost dnsmasq-dhcp[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/opts Oct 14 06:16:18 localhost dnsmasq[334988]: exiting on receipt of SIGTERM Oct 14 06:16:18 localhost systemd[1]: libpod-edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439.scope: Deactivated successfully. Oct 14 06:16:18 localhost podman[335049]: 2025-10-14 10:16:18.252911079 +0000 UTC m=+0.065679468 container kill edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 06:16:18 localhost podman[335021]: 2025-10-14 10:16:18.295504112 +0000 UTC m=+0.208180239 container start 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:16:18 localhost podman[335084]: 2025-10-14 10:16:18.329359173 +0000 UTC m=+0.063733316 container died edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:16:18 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:18.340 270389 INFO neutron.agent.dhcp.agent [None req-7170d93d-b4c0-4376-a77e-2dcbcabb6053 - - - - - -] DHCP configuration for ports {'852ced2f-7182-433e-955e-aa22d1934a9f'} is completed#033[00m Oct 14 06:16:18 localhost systemd[1]: tmp-crun.qEsFDJ.mount: Deactivated successfully. Oct 14 06:16:18 localhost systemd[1]: var-lib-containers-storage-overlay-78f60fb398b8066c9970d1302dd0cfa5c5309a984171e41a883495db95b98bcc-merged.mount: Deactivated successfully. Oct 14 06:16:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:18 localhost podman[335084]: 2025-10-14 10:16:18.378460449 +0000 UTC m=+0.112834582 container cleanup edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:18 localhost systemd[1]: libpod-conmon-edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439.scope: Deactivated successfully. 
Oct 14 06:16:18 localhost podman[335091]: 2025-10-14 10:16:18.403503005 +0000 UTC m=+0.122888650 container remove edea56e728a58ba6804f1d093f8c3ba118ef867c8e8b59bdba5740e3bcfdc439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:16:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:18.406 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bb90059a-750e-43da-ba16-03b3dce8c155) 
old=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2 2001:db8::f816:3eff:fe63:b489'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:18.408 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bb90059a-750e-43da-ba16-03b3dce8c155 in datapath 74049e43-4aa7-4318-9233-a58980c3495b updated#033[00m Oct 14 06:16:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:18.412 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port ad9f95c0-875c-462b-9ab2-af240284b71b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:16:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:18.412 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:18.413 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[09dd844f-844c-4104-81bd-d38098def35f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:18 localhost podman[335112]: Oct 14 
06:16:18 localhost podman[335112]: 2025-10-14 10:16:18.479076116 +0000 UTC m=+0.154144072 container create 0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfdbdb17-6bbf-4fee-8769-34c2b86d2981, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:16:18 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:18.503 270389 INFO neutron.agent.dhcp.agent [None req-b9dd2634-b807-405d-9f78-1cf869de8b98 - - - - - -] DHCP configuration for ports {'4a69a1f3-2c37-4c5f-a0ec-94a65172b8de'} is completed#033[00m Oct 14 06:16:18 localhost systemd[1]: Started libpod-conmon-0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809.scope. Oct 14 06:16:18 localhost systemd[1]: Started libcrun container. 
Oct 14 06:16:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c562fd72ba2302b41bca47dd3b24d460e89191a21f3491c27d7aa55c2edb059/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:18 localhost podman[335112]: 2025-10-14 10:16:18.537351987 +0000 UTC m=+0.212419943 container init 0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfdbdb17-6bbf-4fee-8769-34c2b86d2981, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:18 localhost podman[335112]: 2025-10-14 10:16:18.438883177 +0000 UTC m=+0.113951153 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:18 localhost podman[335112]: 2025-10-14 10:16:18.546547271 +0000 UTC m=+0.221615267 container start 0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfdbdb17-6bbf-4fee-8769-34c2b86d2981, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009) Oct 14 06:16:18 localhost dnsmasq[335135]: started, version 2.85 cachesize 150 Oct 14 06:16:18 localhost dnsmasq[335135]: DNS service limited to local subnets Oct 14 06:16:18 localhost dnsmasq[335135]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:18 localhost dnsmasq[335135]: warning: no upstream servers configured Oct 14 06:16:18 localhost dnsmasq-dhcp[335135]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:16:18 localhost dnsmasq[335135]: read /var/lib/neutron/dhcp/dfdbdb17-6bbf-4fee-8769-34c2b86d2981/addn_hosts - 0 addresses Oct 14 06:16:18 localhost dnsmasq-dhcp[335135]: read /var/lib/neutron/dhcp/dfdbdb17-6bbf-4fee-8769-34c2b86d2981/host Oct 14 06:16:18 localhost dnsmasq-dhcp[335135]: read /var/lib/neutron/dhcp/dfdbdb17-6bbf-4fee-8769-34c2b86d2981/opts Oct 14 06:16:18 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:18.775 270389 INFO neutron.agent.dhcp.agent [None req-5d304c7b-be0f-4c41-a1d0-4026fc018707 - - - - - -] DHCP configuration for ports {'5b417eb3-ed12-4f8c-ba23-8bf5ddc90a19'} is completed#033[00m Oct 14 06:16:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s Oct 14 06:16:19 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:19.660 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:18Z, description=, device_id=e051df2a-6c99-40d0-bcd5-cf988a4b8298, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c5839bcb-bc0a-49b7-9b57-1190b184ad7a, ip_allocation=immediate, mac_address=fa:16:3e:0d:4c:d2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:16:13Z, description=, dns_domain=, id=b3df6336-119f-4ceb-8286-b5fbbf09920b, ipv4_address_scope=None, ipv6_address_scope=None, 
l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-525327720, port_security_enabled=True, project_id=7bf1be3a6a454996a4414fad306906f1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21924, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2040, status=ACTIVE, subnets=['0e55522f-4ccb-4521-b7b3-1f5e90b428a5'], tags=[], tenant_id=7bf1be3a6a454996a4414fad306906f1, updated_at=2025-10-14T10:16:14Z, vlan_transparent=None, network_id=b3df6336-119f-4ceb-8286-b5fbbf09920b, port_security_enabled=False, project_id=7bf1be3a6a454996a4414fad306906f1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2062, status=DOWN, tags=[], tenant_id=7bf1be3a6a454996a4414fad306906f1, updated_at=2025-10-14T10:16:18Z on network b3df6336-119f-4ceb-8286-b5fbbf09920b#033[00m Oct 14 06:16:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:19.671 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:16:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:19 localhost podman[335191]: Oct 14 06:16:19 localhost podman[335191]: 2025-10-14 10:16:19.836006585 +0000 UTC m=+0.098186393 container create e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009) Oct 14 06:16:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:16:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:16:19 localhost systemd[1]: Started libpod-conmon-e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57.scope. Oct 14 06:16:19 localhost podman[335191]: 2025-10-14 10:16:19.792236821 +0000 UTC m=+0.054416689 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:19 localhost dnsmasq[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/addn_hosts - 1 addresses Oct 14 06:16:19 localhost dnsmasq-dhcp[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/host Oct 14 06:16:19 localhost dnsmasq-dhcp[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/opts Oct 14 06:16:19 localhost podman[335215]: 2025-10-14 10:16:19.903978874 +0000 UTC m=+0.075019477 container kill bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:16:19 localhost systemd[1]: Started libcrun container. 
Oct 14 06:16:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ff617e4202b2e4d296eaedc2b004303110ff9fa99fccc48a12029d425f8bbb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:19 localhost podman[335230]: 2025-10-14 10:16:19.965279565 +0000 UTC m=+0.087900900 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:16:19 localhost podman[335230]: 2025-10-14 10:16:19.978388593 +0000 UTC m=+0.101009888 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 06:16:19 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:16:20 localhost podman[335191]: 2025-10-14 10:16:20.03239216 +0000 UTC m=+0.294571948 container init e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 06:16:20 localhost podman[335191]: 2025-10-14 10:16:20.04178482 +0000 UTC m=+0.303964608 container start e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 06:16:20 localhost dnsmasq[335280]: started, version 2.85 cachesize 150
Oct 14 06:16:20 localhost dnsmasq[335280]: DNS service limited to local subnets
Oct 14 06:16:20 localhost dnsmasq[335280]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:16:20 localhost dnsmasq[335280]: warning: no upstream servers configured
Oct 14 06:16:20 localhost dnsmasq-dhcp[335280]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 14 06:16:20 localhost dnsmasq[335280]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:16:20 localhost dnsmasq-dhcp[335280]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:16:20 localhost dnsmasq-dhcp[335280]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:16:20 localhost podman[335224]: 2025-10-14 10:16:20.110046596 +0000 UTC m=+0.236280567 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:16:20 localhost podman[335224]: 2025-10-14 10:16:20.118152801 +0000 UTC m=+0.244386812 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009)
Oct 14 06:16:20 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 06:16:20 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:20.169 270389 INFO neutron.agent.dhcp.agent [None req-bf30e795-01bd-42ca-b818-04f23c4fb4ae - - - - - -] DHCP configuration for ports {'c5839bcb-bc0a-49b7-9b57-1190b184ad7a'} is completed#033[00m
Oct 14 06:16:20 localhost systemd[1]: tmp-crun.XB44Sb.mount: Deactivated successfully.
Oct 14 06:16:20 localhost nova_compute[295778]: 2025-10-14 10:16:20.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:20 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:20.503 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:19Z, description=, device_id=e707dd68-ed65-47c1-aba8-97f5082fcbee, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=99126604-7df5-48ce-9ef6-b69aca42913b, ip_allocation=immediate, mac_address=fa:16:3e:67:45:80, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:16:09Z, description=, dns_domain=, id=3a7c0fe5-96d6-4107-a816-0bfeb02f7211, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--2104529617, port_security_enabled=True, project_id=350a918b3c8b45c8b7f0665a734b2d1c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51411, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2014, status=ACTIVE, subnets=['7f54a92e-f5e6-47c2-a81f-d633f1294ab9'], tags=[], tenant_id=350a918b3c8b45c8b7f0665a734b2d1c, updated_at=2025-10-14T10:16:14Z, vlan_transparent=None, network_id=3a7c0fe5-96d6-4107-a816-0bfeb02f7211, port_security_enabled=False, project_id=350a918b3c8b45c8b7f0665a734b2d1c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2068, status=DOWN, tags=[], tenant_id=350a918b3c8b45c8b7f0665a734b2d1c, updated_at=2025-10-14T10:16:19Z on network 3a7c0fe5-96d6-4107-a816-0bfeb02f7211#033[00m
Oct 14 06:16:20 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:20.646 270389 INFO neutron.agent.dhcp.agent [None req-4604168f-798c-44c0-978e-d392c893bdc1 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed#033[00m
Oct 14 06:16:20 localhost dnsmasq[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/addn_hosts - 1 addresses
Oct 14 06:16:20 localhost podman[335301]: 2025-10-14 10:16:20.802838767 +0000 UTC m=+0.059542736 container kill 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 06:16:20 localhost dnsmasq-dhcp[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/host
Oct 14 06:16:20 localhost dnsmasq-dhcp[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/opts
Oct 14 06:16:20 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:20.825 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2 2001:db8::f816:3eff:fe63:b489'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bb90059a-750e-43da-ba16-03b3dce8c155) old=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:16:20 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:20.826 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bb90059a-750e-43da-ba16-03b3dce8c155 in datapath 74049e43-4aa7-4318-9233-a58980c3495b updated#033[00m
Oct 14 06:16:20 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:20.829 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port ad9f95c0-875c-462b-9ab2-af240284b71b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Oct 14 06:16:20 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:20.829 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:16:20 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:20.830 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[54be2d60-835c-43de-a482-b90611479e8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:16:21 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:21.084 270389 INFO neutron.agent.dhcp.agent [None req-f834764a-4889-44c0-ae86-9eba47eaad08 - - - - - -] DHCP configuration for ports {'99126604-7df5-48ce-9ef6-b69aca42913b'} is completed#033[00m
Oct 14 06:16:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Oct 14 06:16:21 localhost dnsmasq[335280]: exiting on receipt of SIGTERM
Oct 14 06:16:21 localhost podman[335339]: 2025-10-14 10:16:21.265134686 +0000 UTC m=+0.059613788 container kill e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 06:16:21 localhost systemd[1]: libpod-e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57.scope: Deactivated successfully.
Oct 14 06:16:21 localhost podman[335354]: 2025-10-14 10:16:21.320496528 +0000 UTC m=+0.038091794 container died e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 06:16:21 localhost systemd[1]: tmp-crun.rCx7l1.mount: Deactivated successfully.
Oct 14 06:16:21 localhost systemd[1]: var-lib-containers-storage-overlay-32ff617e4202b2e4d296eaedc2b004303110ff9fa99fccc48a12029d425f8bbb-merged.mount: Deactivated successfully.
Oct 14 06:16:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57-userdata-shm.mount: Deactivated successfully.
Oct 14 06:16:21 localhost podman[335354]: 2025-10-14 10:16:21.372509462 +0000 UTC m=+0.090104678 container remove e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:16:21 localhost systemd[1]: libpod-conmon-e7968073bf341205752d0f5fb59062239769bc28e8314ab71b165ba40bd87c57.scope: Deactivated successfully.
Oct 14 06:16:21 localhost nova_compute[295778]: 2025-10-14 10:16:21.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.066 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.067 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.067 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.068 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.068 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:16:22 localhost podman[335433]:
Oct 14 06:16:22 localhost podman[335433]: 2025-10-14 10:16:22.284432332 +0000 UTC m=+0.094580446 container create 5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 06:16:22 localhost systemd[1]: Started libpod-conmon-5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a.scope.
Oct 14 06:16:22 localhost podman[335433]: 2025-10-14 10:16:22.239532148 +0000 UTC m=+0.049680282 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:16:22 localhost systemd[1]: Started libcrun container.
Oct 14 06:16:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099899396c646ea9e3c1553d94f402f3838d6b3ff939a588894db8516daf0ad9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:16:22 localhost systemd[1]: tmp-crun.tiOmEn.mount: Deactivated successfully.
Oct 14 06:16:22 localhost podman[335433]: 2025-10-14 10:16:22.369142206 +0000 UTC m=+0.179290310 container init 5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:16:22 localhost podman[335433]: 2025-10-14 10:16:22.378577387 +0000 UTC m=+0.188725501 container start 5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:16:22 localhost dnsmasq[335470]: started, version 2.85 cachesize 150
Oct 14 06:16:22 localhost dnsmasq[335470]: DNS service limited to local subnets
Oct 14 06:16:22 localhost dnsmasq[335470]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:16:22 localhost dnsmasq[335470]: warning: no upstream servers configured
Oct 14 06:16:22 localhost dnsmasq-dhcp[335470]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 14 06:16:22 localhost dnsmasq-dhcp[335470]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 14 06:16:22 localhost dnsmasq[335470]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:16:22 localhost dnsmasq-dhcp[335470]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:16:22 localhost dnsmasq-dhcp[335470]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:16:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 14 06:16:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3658649534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.585 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:16:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:22.656 270389 INFO neutron.agent.dhcp.agent [None req-3e6f03db-b704-4a7c-8e24-a42eb7d52be8 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.752 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.753 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11469MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.753 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.754 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.815 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.815 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.829 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.940 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.942 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 06:16:22 localhost nova_compute[295778]: 2025-10-14 10:16:22.973 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.006 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.023 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Oct 14 06:16:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 14 06:16:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/742175119' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 14 06:16:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:23.513 270389 INFO neutron.agent.linux.ip_lib [None req-78111bbf-7e2e-4396-80cc-22896e7b14f6 - - - - - -] Device tap47c7d829-3e cannot be used as it has no MAC address#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.530 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.537 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:23 localhost kernel: device tap47c7d829-3e entered promiscuous mode
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:23 localhost ovn_controller[156286]: 2025-10-14T10:16:23Z|00284|binding|INFO|Claiming lport 47c7d829-3e0b-4bd3-8ca4-e25aec3cf2d6 for this chassis.
Oct 14 06:16:23 localhost ovn_controller[156286]: 2025-10-14T10:16:23Z|00285|binding|INFO|47c7d829-3e0b-4bd3-8ca4-e25aec3cf2d6: Claiming unknown
Oct 14 06:16:23 localhost NetworkManager[5972]: [1760436983.5471] manager: (tap47c7d829-3e): new Generic device (/org/freedesktop/NetworkManager/Devices/54)
Oct 14 06:16:23 localhost systemd-udevd[335505]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:16:23 localhost ovn_controller[156286]: 2025-10-14T10:16:23Z|00286|binding|INFO|Setting lport 47c7d829-3e0b-4bd3-8ca4-e25aec3cf2d6 ovn-installed in OVS
Oct 14 06:16:23 localhost ovn_controller[156286]: 2025-10-14T10:16:23Z|00287|binding|INFO|Setting lport 47c7d829-3e0b-4bd3-8ca4-e25aec3cf2d6 up in Southbound
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.557 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.559 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.560 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.562 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:16:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:23.557 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-0016c976-113d-4d60-ac56-d70da6169427', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0016c976-113d-4d60-ac56-d70da6169427', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8ca5e1d577fe463aa89a13e320c6dd5f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9a0ba7d-e266-40f3-a470-8e74975ca13d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=47c7d829-3e0b-4bd3-8ca4-e25aec3cf2d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:16:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:23.560 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 47c7d829-3e0b-4bd3-8ca4-e25aec3cf2d6 in datapath 0016c976-113d-4d60-ac56-d70da6169427 bound to our chassis#033[00m
Oct 14 06:16:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:23.563 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0016c976-113d-4d60-ac56-d70da6169427 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 06:16:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:23.564 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ea5d02-45d2-4897-ae74-413cd4cd88f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:16:23 localhost journal[236030]: ethtool ioctl error on tap47c7d829-3e: No such device
Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:23 localhost journal[236030]: ethtool ioctl error on tap47c7d829-3e: No such device Oct 14 06:16:23 localhost journal[236030]: ethtool ioctl error on tap47c7d829-3e: No such device Oct 14 06:16:23 localhost journal[236030]: ethtool ioctl error on tap47c7d829-3e: No such device Oct 14 06:16:23 localhost journal[236030]: ethtool ioctl error on tap47c7d829-3e: No such device Oct 14 06:16:23 localhost journal[236030]: ethtool ioctl error on tap47c7d829-3e: No such device Oct 14 06:16:23 localhost journal[236030]: ethtool ioctl error on tap47c7d829-3e: No such device Oct 14 06:16:23 localhost journal[236030]: ethtool ioctl error on tap47c7d829-3e: No such device Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:23 localhost nova_compute[295778]: 2025-10-14 10:16:23.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:23 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:23.716 2 INFO neutron.agent.securitygroups_rpc [None req-71ba91b5-684f-48d7-9c14-69fcb9a93319 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:23.822 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:22Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=b86aa866-5cb8-47ed-bd65-6c0566a329dc, 
ip_allocation=immediate, mac_address=fa:16:3e:c9:04:79, name=tempest-NetworksTestDHCPv6-1302131565, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=35, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['c24c5921-c147-476a-9c97-bd01bed520c3', 'f573db77-b02e-4074-bd0e-ac46aac7b7c8'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:18Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2094, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:23Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:16:24 localhost podman[335568]: 2025-10-14 10:16:24.05703747 +0000 UTC m=+0.054920241 container kill 5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, 
org.label-schema.license=GPLv2) Oct 14 06:16:24 localhost dnsmasq[335470]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 2 addresses Oct 14 06:16:24 localhost dnsmasq-dhcp[335470]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:16:24 localhost dnsmasq-dhcp[335470]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:16:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:24.141 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:18Z, description=, device_id=e051df2a-6c99-40d0-bcd5-cf988a4b8298, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c5839bcb-bc0a-49b7-9b57-1190b184ad7a, ip_allocation=immediate, mac_address=fa:16:3e:0d:4c:d2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:16:13Z, description=, dns_domain=, id=b3df6336-119f-4ceb-8286-b5fbbf09920b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-525327720, port_security_enabled=True, project_id=7bf1be3a6a454996a4414fad306906f1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21924, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2040, status=ACTIVE, subnets=['0e55522f-4ccb-4521-b7b3-1f5e90b428a5'], tags=[], tenant_id=7bf1be3a6a454996a4414fad306906f1, updated_at=2025-10-14T10:16:14Z, vlan_transparent=None, network_id=b3df6336-119f-4ceb-8286-b5fbbf09920b, port_security_enabled=False, project_id=7bf1be3a6a454996a4414fad306906f1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], 
standard_attr_id=2062, status=DOWN, tags=[], tenant_id=7bf1be3a6a454996a4414fad306906f1, updated_at=2025-10-14T10:16:18Z on network b3df6336-119f-4ceb-8286-b5fbbf09920b#033[00m Oct 14 06:16:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:24.306 270389 INFO neutron.agent.dhcp.agent [None req-b87883ff-1f50-4ff0-8d1c-382bcfb8c849 - - - - - -] DHCP configuration for ports {'b86aa866-5cb8-47ed-bd65-6c0566a329dc'} is completed#033[00m Oct 14 06:16:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:24.373 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port bfd5d1cc-6c68-4f19-af47-82378ab8ebba with type ""#033[00m Oct 14 06:16:24 localhost ovn_controller[156286]: 2025-10-14T10:16:24Z|00288|binding|INFO|Removing iface tap47c7d829-3e ovn-installed in OVS Oct 14 06:16:24 localhost ovn_controller[156286]: 2025-10-14T10:16:24Z|00289|binding|INFO|Removing lport 47c7d829-3e0b-4bd3-8ca4-e25aec3cf2d6 ovn-installed in OVS Oct 14 06:16:24 localhost systemd[1]: tmp-crun.eSqWTs.mount: Deactivated successfully. 
Oct 14 06:16:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:24.374 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-0016c976-113d-4d60-ac56-d70da6169427', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0016c976-113d-4d60-ac56-d70da6169427', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8ca5e1d577fe463aa89a13e320c6dd5f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9a0ba7d-e266-40f3-a470-8e74975ca13d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=47c7d829-3e0b-4bd3-8ca4-e25aec3cf2d6) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:24.376 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 47c7d829-3e0b-4bd3-8ca4-e25aec3cf2d6 in datapath 0016c976-113d-4d60-ac56-d70da6169427 unbound from our chassis#033[00m Oct 14 06:16:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:24.378 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0016c976-113d-4d60-ac56-d70da6169427 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:16:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:24.415 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[561edbe9-4fe6-48f1-9228-d8cf9dc225d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:24 localhost nova_compute[295778]: 2025-10-14 10:16:24.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:24 localhost nova_compute[295778]: 2025-10-14 10:16:24.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:24 localhost dnsmasq[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/addn_hosts - 1 addresses Oct 14 06:16:24 localhost dnsmasq-dhcp[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/host Oct 14 06:16:24 localhost dnsmasq-dhcp[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/opts Oct 14 06:16:24 localhost podman[335625]: 2025-10-14 10:16:24.420423928 +0000 UTC m=+0.127679038 container kill bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:16:24 localhost podman[335643]: Oct 14 06:16:24 localhost podman[335643]: 2025-10-14 10:16:24.469295748 +0000 UTC m=+0.097472974 container create 46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0016c976-113d-4d60-ac56-d70da6169427, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:16:24 localhost podman[335643]: 2025-10-14 10:16:24.427577058 +0000 UTC m=+0.055754324 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:24 localhost systemd[1]: Started libpod-conmon-46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc.scope. Oct 14 06:16:24 localhost systemd[1]: Started libcrun container. Oct 14 06:16:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44192a9455237af8abf0b3d60db632f3a342670a4c61b04b215c6c92d221949f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:24 localhost podman[335643]: 2025-10-14 10:16:24.563877434 +0000 UTC m=+0.192054670 container init 46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0016c976-113d-4d60-ac56-d70da6169427, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 06:16:24 localhost podman[335643]: 2025-10-14 10:16:24.573374126 +0000 UTC m=+0.201551362 container start 46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0016c976-113d-4d60-ac56-d70da6169427, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:16:24 localhost dnsmasq[335670]: started, version 2.85 cachesize 150 Oct 14 06:16:24 localhost dnsmasq[335670]: DNS service limited to local subnets Oct 14 06:16:24 localhost dnsmasq[335670]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:24 localhost dnsmasq[335670]: warning: no upstream servers configured Oct 14 06:16:24 localhost dnsmasq-dhcp[335670]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:16:24 localhost dnsmasq[335670]: read /var/lib/neutron/dhcp/0016c976-113d-4d60-ac56-d70da6169427/addn_hosts - 0 addresses Oct 14 06:16:24 localhost dnsmasq-dhcp[335670]: read /var/lib/neutron/dhcp/0016c976-113d-4d60-ac56-d70da6169427/host Oct 14 06:16:24 localhost dnsmasq-dhcp[335670]: read /var/lib/neutron/dhcp/0016c976-113d-4d60-ac56-d70da6169427/opts Oct 14 06:16:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:24.651 270389 INFO neutron.agent.dhcp.agent [None req-ff737b07-9a60-40cc-983c-cb50c40e8838 - - - - - -] DHCP configuration for ports {'c5839bcb-bc0a-49b7-9b57-1190b184ad7a'} is completed#033[00m Oct 14 06:16:24 localhost nova_compute[295778]: 2025-10-14 10:16:24.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:24 localhost kernel: device tap47c7d829-3e left promiscuous mode Oct 14 06:16:24 localhost nova_compute[295778]: 
2025-10-14 10:16:24.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:24.793 270389 INFO neutron.agent.dhcp.agent [None req-7eecd409-7d21-43ad-8d58-b1ce9127bde9 - - - - - -] DHCP configuration for ports {'101920b6-3c37-4db5-956f-c048085dd78c'} is completed#033[00m Oct 14 06:16:25 localhost nova_compute[295778]: 2025-10-14 10:16:25.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.103 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:19Z, description=, device_id=e707dd68-ed65-47c1-aba8-97f5082fcbee, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=99126604-7df5-48ce-9ef6-b69aca42913b, ip_allocation=immediate, mac_address=fa:16:3e:67:45:80, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:16:09Z, description=, dns_domain=, id=3a7c0fe5-96d6-4107-a816-0bfeb02f7211, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--2104529617, port_security_enabled=True, project_id=350a918b3c8b45c8b7f0665a734b2d1c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51411, qos_policy_id=None, revision_number=2, router:external=False, shared=False, 
standard_attr_id=2014, status=ACTIVE, subnets=['7f54a92e-f5e6-47c2-a81f-d633f1294ab9'], tags=[], tenant_id=350a918b3c8b45c8b7f0665a734b2d1c, updated_at=2025-10-14T10:16:14Z, vlan_transparent=None, network_id=3a7c0fe5-96d6-4107-a816-0bfeb02f7211, port_security_enabled=False, project_id=350a918b3c8b45c8b7f0665a734b2d1c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2068, status=DOWN, tags=[], tenant_id=350a918b3c8b45c8b7f0665a734b2d1c, updated_at=2025-10-14T10:16:19Z on network 3a7c0fe5-96d6-4107-a816-0bfeb02f7211#033[00m Oct 14 06:16:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Oct 14 06:16:25 localhost dnsmasq[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/addn_hosts - 1 addresses Oct 14 06:16:25 localhost dnsmasq-dhcp[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/host Oct 14 06:16:25 localhost podman[335726]: 2025-10-14 10:16:25.31924767 +0000 UTC m=+0.059413481 container kill 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:16:25 localhost dnsmasq-dhcp[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/opts Oct 14 06:16:25 localhost nova_compute[295778]: 2025-10-14 10:16:25.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:16:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:16:25 localhost podman[335758]: 2025-10-14 10:16:25.552759962 +0000 UTC m=+0.095783619 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, 
maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3) Oct 14 06:16:25 localhost podman[335758]: 2025-10-14 10:16:25.56658923 +0000 UTC m=+0.109612877 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true) Oct 14 06:16:25 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:25.570 2 INFO neutron.agent.securitygroups_rpc [None req-c5aa3340-9fd7-4904-bb68-292ec0b04a2a 
f13b53fbf22a4c35bd774e0276dc1885 c1b284821e574367bb6352caf7327da5 - - default default] Security group member updated ['c10bbd65-342a-46b6-95d2-96fbac5e8435']#033[00m Oct 14 06:16:25 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.614 270389 INFO neutron.agent.dhcp.agent [None req-11fbff10-36c9-44af-b8d0-b52b28fe499c - - - - - -] DHCP configuration for ports {'99126604-7df5-48ce-9ef6-b69aca42913b'} is completed#033[00m Oct 14 06:16:25 localhost podman[335759]: 2025-10-14 10:16:25.672000035 +0000 UTC m=+0.212339641 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 14 06:16:25 localhost podman[335759]: 2025-10-14 10:16:25.686142711 +0000 UTC m=+0.226482307 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, 
org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:16:25 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:25.690 2 INFO neutron.agent.securitygroups_rpc [None req-9dacf8d0-f231-40b3-b91a-c67bf08625fb 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:25 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:16:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:16:25 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:16:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:16:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:16:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:16:25 localhost dnsmasq[335670]: read /var/lib/neutron/dhcp/0016c976-113d-4d60-ac56-d70da6169427/addn_hosts - 0 addresses Oct 14 06:16:25 localhost dnsmasq-dhcp[335670]: read /var/lib/neutron/dhcp/0016c976-113d-4d60-ac56-d70da6169427/host Oct 14 06:16:25 localhost dnsmasq-dhcp[335670]: read /var/lib/neutron/dhcp/0016c976-113d-4d60-ac56-d70da6169427/opts 
Oct 14 06:16:25 localhost podman[335827]: 2025-10-14 10:16:25.788031101 +0000 UTC m=+0.044222628 container kill 46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0016c976-113d-4d60-ac56-d70da6169427, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:25 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:16:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:16:25 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 859d3181-d057-4653-8c28-2dfd07baaad3 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:16:25 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 859d3181-d057-4653-8c28-2dfd07baaad3 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:16:25 localhost ceph-mgr[300442]: [progress INFO root] Completed event 859d3181-d057-4653-8c28-2dfd07baaad3 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 0016c976-113d-4d60-ac56-d70da6169427.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap47c7d829-3e not found in namespace qdhcp-0016c976-113d-4d60-ac56-d70da6169427. 
Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Oct 14 06:16:25 
localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent return fut.result() Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent return self.__get_result() Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent raise self._exception Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 
ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap47c7d829-3e not found in namespace qdhcp-0016c976-113d-4d60-ac56-d70da6169427. Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.813 270389 ERROR neutron.agent.dhcp.agent #033[00m Oct 14 06:16:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:25.815 270389 INFO neutron.agent.dhcp.agent [None req-839e85df-74c5-4c76-95e1-e7108c3067f2 - - - - - -] Synchronizing state#033[00m Oct 14 06:16:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:16:25 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:16:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:26.307 270389 INFO neutron.agent.dhcp.agent [None req-ee9ab6b9-467f-4f1a-aecd-aafae3037e44 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 14 06:16:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:26.308 270389 INFO neutron.agent.dhcp.agent [-] Starting network 0016c976-113d-4d60-ac56-d70da6169427 dhcp configuration#033[00m Oct 14 06:16:26 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:26.319 2 INFO neutron.agent.securitygroups_rpc [None req-c5aa3340-9fd7-4904-bb68-292ec0b04a2a f13b53fbf22a4c35bd774e0276dc1885 c1b284821e574367bb6352caf7327da5 - - default default] Security group member updated ['c10bbd65-342a-46b6-95d2-96fbac5e8435']#033[00m Oct 14 06:16:26 localhost dnsmasq[335670]: exiting on receipt of SIGTERM Oct 14 06:16:26 localhost podman[335876]: 2025-10-14 10:16:26.491796714 +0000 UTC 
m=+0.063168222 container kill 46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0016c976-113d-4d60-ac56-d70da6169427, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:26 localhost systemd[1]: libpod-46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc.scope: Deactivated successfully. Oct 14 06:16:26 localhost podman[335888]: 2025-10-14 10:16:26.569360597 +0000 UTC m=+0.061864096 container died 46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0016c976-113d-4d60-ac56-d70da6169427, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:16:26 localhost systemd[1]: tmp-crun.JexLqY.mount: Deactivated successfully. 
Oct 14 06:16:26 localhost podman[335888]: 2025-10-14 10:16:26.606570347 +0000 UTC m=+0.099073806 container cleanup 46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0016c976-113d-4d60-ac56-d70da6169427, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:26 localhost systemd[1]: libpod-conmon-46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc.scope: Deactivated successfully. Oct 14 06:16:26 localhost podman[335890]: 2025-10-14 10:16:26.640190281 +0000 UTC m=+0.126644270 container remove 46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0016c976-113d-4d60-ac56-d70da6169427, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:26.728 270389 INFO neutron.agent.dhcp.agent [None req-5360baf0-04ed-45a5-951e-feee92eb3cfa - - - - - -] Finished network 0016c976-113d-4d60-ac56-d70da6169427 dhcp configuration#033[00m Oct 14 06:16:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:26.729 270389 INFO neutron.agent.dhcp.agent [None req-ee9ab6b9-467f-4f1a-aecd-aafae3037e44 - - - - - -] Synchronizing state complete#033[00m Oct 14 06:16:26 localhost ceph-mon[307093]: 
from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:16:26 localhost nova_compute[295778]: 2025-10-14 10:16:26.923 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:26 localhost nova_compute[295778]: 2025-10-14 10:16:26.924 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 14 06:16:26 localhost dnsmasq[335470]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:26 localhost dnsmasq-dhcp[335470]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:16:26 localhost dnsmasq-dhcp[335470]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:16:26 localhost podman[335936]: 2025-10-14 10:16:26.978266446 +0000 UTC m=+0.059440862 container kill 5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:16:27 localhost systemd[1]: 
var-lib-containers-storage-overlay-44192a9455237af8abf0b3d60db632f3a342670a4c61b04b215c6c92d221949f-merged.mount: Deactivated successfully. Oct 14 06:16:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46d31909eaddbebf821cb57d7cca5d3639246089b08b91f72ddde26ed66ca3cc-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:27 localhost systemd[1]: run-netns-qdhcp\x2d0016c976\x2d113d\x2d4d60\x2dac56\x2dd70da6169427.mount: Deactivated successfully. Oct 14 06:16:27 localhost nova_compute[295778]: 2025-10-14 10:16:27.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:27 localhost podman[335975]: 2025-10-14 10:16:27.67157272 +0000 UTC m=+0.059540385 container kill 5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:16:27 localhost dnsmasq[335470]: exiting on receipt of SIGTERM Oct 14 06:16:27 localhost systemd[1]: libpod-5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a.scope: Deactivated successfully. 
Oct 14 06:16:27 localhost podman[335991]: 2025-10-14 10:16:27.746913494 +0000 UTC m=+0.056227007 container died 5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:27 localhost systemd[1]: tmp-crun.ZeznOF.mount: Deactivated successfully. Oct 14 06:16:27 localhost podman[335991]: 2025-10-14 10:16:27.784182106 +0000 UTC m=+0.093495579 container cleanup 5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:16:27 localhost systemd[1]: libpod-conmon-5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a.scope: Deactivated successfully. 
Oct 14 06:16:27 localhost podman[335990]: 2025-10-14 10:16:27.822940617 +0000 UTC m=+0.130060011 container remove 5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:16:27 localhost nova_compute[295778]: 2025-10-14 10:16:27.919 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:28 localhost nova_compute[295778]: 2025-10-14 10:16:28.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:28 localhost systemd[1]: var-lib-containers-storage-overlay-099899396c646ea9e3c1553d94f402f3838d6b3ff939a588894db8516daf0ad9-merged.mount: Deactivated successfully. Oct 14 06:16:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b5ba8216d19b43d70fef2ba3da21ace56ec0aecfd8006e6ee3230be02d95a2a-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:16:28 localhost podman[336067]: Oct 14 06:16:28 localhost podman[336067]: 2025-10-14 10:16:28.663441217 +0000 UTC m=+0.091967277 container create 0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3) Oct 14 06:16:28 localhost systemd[1]: Started libpod-conmon-0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d.scope. Oct 14 06:16:28 localhost podman[336067]: 2025-10-14 10:16:28.618342527 +0000 UTC m=+0.046868597 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:28 localhost systemd[1]: Started libcrun container. 
Oct 14 06:16:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fe20d6f623462e71ecf91739b22e54f00a50f4717730d293bc5a11cce6ef1a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:28 localhost podman[336067]: 2025-10-14 10:16:28.752058235 +0000 UTC m=+0.180584305 container init 0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:16:28 localhost podman[336067]: 2025-10-14 10:16:28.760432008 +0000 UTC m=+0.188958078 container start 0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:16:28 localhost dnsmasq[336086]: started, version 2.85 cachesize 150 Oct 14 06:16:28 localhost dnsmasq[336086]: DNS service limited to local subnets Oct 14 06:16:28 localhost dnsmasq[336086]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:28 localhost dnsmasq[336086]: warning: no upstream servers 
configured Oct 14 06:16:28 localhost dnsmasq-dhcp[336086]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:16:28 localhost dnsmasq[336086]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:28 localhost dnsmasq-dhcp[336086]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:16:28 localhost dnsmasq-dhcp[336086]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:16:28 localhost nova_compute[295778]: 2025-10-14 10:16:28.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:28 localhost nova_compute[295778]: 2025-10-14 10:16:28.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:28 localhost nova_compute[295778]: 2025-10-14 10:16:28.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:16:28 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:28.949 270389 INFO neutron.agent.dhcp.agent [None req-afc0a73b-7363-41ca-a93c-e90e475a4db8 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a1df01fc-d199-4a1e-af67-e72b780e35b7'} is completed#033[00m Oct 14 06:16:29 localhost dnsmasq[336086]: exiting on receipt of SIGTERM Oct 14 06:16:29 localhost podman[336104]: 2025-10-14 10:16:29.078328545 +0000 UTC m=+0.062169705 container kill 0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:16:29 localhost systemd[1]: libpod-0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d.scope: Deactivated successfully. 
Oct 14 06:16:29 localhost podman[336118]: 2025-10-14 10:16:29.150567927 +0000 UTC m=+0.060156401 container died 0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:29 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:29.174 2 INFO neutron.agent.securitygroups_rpc [None req-c4c00a8c-e700-4f72-9a8b-3220edd3eb7f f13b53fbf22a4c35bd774e0276dc1885 c1b284821e574367bb6352caf7327da5 - - default default] Security group member updated ['c10bbd65-342a-46b6-95d2-96fbac5e8435']#033[00m Oct 14 06:16:29 localhost podman[336118]: 2025-10-14 10:16:29.178926651 +0000 UTC m=+0.088515085 container cleanup 0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:16:29 localhost systemd[1]: libpod-conmon-0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d.scope: Deactivated successfully. 
Oct 14 06:16:29 localhost podman[336120]: 2025-10-14 10:16:29.231371186 +0000 UTC m=+0.131408387 container remove 0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 14 06:16:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail Oct 14 06:16:29 localhost nova_compute[295778]: 2025-10-14 10:16:29.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:29 localhost ovn_controller[156286]: 2025-10-14T10:16:29Z|00290|binding|INFO|Releasing lport a1df01fc-d199-4a1e-af67-e72b780e35b7 from this chassis (sb_readonly=0) Oct 14 06:16:29 localhost ovn_controller[156286]: 2025-10-14T10:16:29Z|00291|binding|INFO|Setting lport a1df01fc-d199-4a1e-af67-e72b780e35b7 down in Southbound Oct 14 06:16:29 localhost kernel: device tapa1df01fc-d1 left promiscuous mode Oct 14 06:16:29 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:29.297 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 
2001:db8::f816:3eff:fed8:a17a/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a1df01fc-d199-4a1e-af67-e72b780e35b7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:29 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:29.299 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a1df01fc-d199-4a1e-af67-e72b780e35b7 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:16:29 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:29.304 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:29 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:29.305 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[61a8b2d0-c65a-4b42-bad6-be57f347d377]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:29 localhost nova_compute[295778]: 2025-10-14 10:16:29.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:29 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:16:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:16:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:16:29 localhost systemd[1]: var-lib-containers-storage-overlay-65fe20d6f623462e71ecf91739b22e54f00a50f4717730d293bc5a11cce6ef1a-merged.mount: Deactivated successfully. Oct 14 06:16:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ff2fd1164f55700535ad6590c1616900fbcdaecb3c91df0ae239868b019309d-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:29.498 270389 INFO neutron.agent.dhcp.agent [None req-3d505469-4a00-4a88-8e71-2a132742b6e4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:29 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. 
Oct 14 06:16:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:29 localhost nova_compute[295778]: 2025-10-14 10:16:29.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:29 localhost nova_compute[295778]: 2025-10-14 10:16:29.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:16:29 localhost nova_compute[295778]: 2025-10-14 10:16:29.906 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:16:29 localhost nova_compute[295778]: 2025-10-14 10:16:29.927 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:16:30 localhost nova_compute[295778]: 2025-10-14 10:16:30.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:30 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:16:30 localhost podman[246584]: time="2025-10-14T10:16:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:16:30 localhost podman[246584]: @ - - [14/Oct/2025:10:16:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 151784 "" "Go-http-client/1.1" Oct 14 06:16:30 localhost podman[246584]: @ - - [14/Oct/2025:10:16:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20783 "" "Go-http-client/1.1" Oct 14 06:16:30 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:30.766 2 INFO neutron.agent.securitygroups_rpc [None req-e7264db5-7468-4a06-8b47-30dfce2eff1c f13b53fbf22a4c35bd774e0276dc1885 c1b284821e574367bb6352caf7327da5 - - default default] Security group member updated ['c10bbd65-342a-46b6-95d2-96fbac5e8435']#033[00m Oct 14 06:16:30 localhost podman[336166]: 2025-10-14 10:16:30.781247718 +0000 UTC m=+0.062134264 container kill 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 
06:16:30 localhost dnsmasq[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/addn_hosts - 0 addresses Oct 14 06:16:30 localhost dnsmasq-dhcp[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/host Oct 14 06:16:30 localhost dnsmasq-dhcp[335082]: read /var/lib/neutron/dhcp/3a7c0fe5-96d6-4107-a816-0bfeb02f7211/opts Oct 14 06:16:30 localhost nova_compute[295778]: 2025-10-14 10:16:30.922 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s Oct 14 06:16:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:16:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3188014417' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:16:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:16:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3188014417' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:16:31 localhost nova_compute[295778]: 2025-10-14 10:16:31.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:32 localhost nova_compute[295778]: 2025-10-14 10:16:32.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:32 localhost kernel: device tap97cf526a-ae left promiscuous mode Oct 14 06:16:32 localhost ovn_controller[156286]: 2025-10-14T10:16:32Z|00292|binding|INFO|Releasing lport 97cf526a-ae98-4bc2-bd24-f3511b475392 from this chassis (sb_readonly=0) Oct 14 06:16:32 localhost ovn_controller[156286]: 2025-10-14T10:16:32Z|00293|binding|INFO|Setting lport 97cf526a-ae98-4bc2-bd24-f3511b475392 down in Southbound Oct 14 06:16:32 localhost nova_compute[295778]: 2025-10-14 10:16:32.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:32 localhost nova_compute[295778]: 2025-10-14 10:16:32.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:32.036 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-3a7c0fe5-96d6-4107-a816-0bfeb02f7211', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a7c0fe5-96d6-4107-a816-0bfeb02f7211', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '350a918b3c8b45c8b7f0665a734b2d1c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ede9100f-e6e2-42fd-9f07-ec4e0cf25a0d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=97cf526a-ae98-4bc2-bd24-f3511b475392) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:32.038 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 97cf526a-ae98-4bc2-bd24-f3511b475392 in datapath 3a7c0fe5-96d6-4107-a816-0bfeb02f7211 unbound from our chassis#033[00m Oct 14 06:16:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:32.043 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a7c0fe5-96d6-4107-a816-0bfeb02f7211, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:32.044 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7f39bc-6a48-43ce-b98a-671e1ddbd8c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:33 localhost nova_compute[295778]: 2025-10-14 10:16:33.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s Oct 14 06:16:33 localhost openstack_network_exporter[248748]: ERROR 10:16:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:16:33 localhost openstack_network_exporter[248748]: Oct 14 06:16:33 localhost openstack_network_exporter[248748]: ERROR 10:16:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:16:33 localhost openstack_network_exporter[248748]: ERROR 10:16:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:16:33 localhost openstack_network_exporter[248748]: ERROR 10:16:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:16:33 localhost openstack_network_exporter[248748]: ERROR 10:16:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:16:33 localhost openstack_network_exporter[248748]: Oct 14 06:16:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:16:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:16:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:16:33 localhost podman[336190]: 2025-10-14 10:16:33.545622511 +0000 UTC m=+0.083233145 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 14 06:16:33 localhost podman[336190]: 2025-10-14 10:16:33.55726022 +0000 UTC m=+0.094870854 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm) Oct 14 06:16:33 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:16:33 localhost podman[336191]: 2025-10-14 10:16:33.613540749 +0000 UTC m=+0.146592032 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:16:33 localhost podman[336192]: 2025-10-14 10:16:33.666746053 +0000 UTC m=+0.192457221 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:16:33 localhost podman[336191]: 2025-10-14 10:16:33.686309854 +0000 UTC m=+0.219361147 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 06:16:33 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:16:33 localhost podman[336192]: 2025-10-14 10:16:33.706273326 +0000 UTC m=+0.231984514 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:16:33 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:16:33 localhost nova_compute[295778]: 2025-10-14 10:16:33.899 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:34 localhost nova_compute[295778]: 2025-10-14 10:16:34.454 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 935 B/s wr, 21 op/s Oct 14 06:16:35 localhost nova_compute[295778]: 2025-10-14 10:16:35.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:36 localhost podman[336274]: 2025-10-14 10:16:36.30233757 +0000 UTC m=+0.063474779 container kill bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:16:36 localhost dnsmasq[334994]: read 
/var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/addn_hosts - 0 addresses Oct 14 06:16:36 localhost dnsmasq-dhcp[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/host Oct 14 06:16:36 localhost dnsmasq-dhcp[334994]: read /var/lib/neutron/dhcp/b3df6336-119f-4ceb-8286-b5fbbf09920b/opts Oct 14 06:16:36 localhost nova_compute[295778]: 2025-10-14 10:16:36.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:16:37 localhost nova_compute[295778]: 2025-10-14 10:16:37.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:37 localhost ovn_controller[156286]: 2025-10-14T10:16:37Z|00294|binding|INFO|Releasing lport 23e1c194-7307-4419-b327-510181e0520f from this chassis (sb_readonly=0) Oct 14 06:16:37 localhost kernel: device tap23e1c194-73 left promiscuous mode Oct 14 06:16:37 localhost ovn_controller[156286]: 2025-10-14T10:16:37Z|00295|binding|INFO|Setting lport 23e1c194-7307-4419-b327-510181e0520f down in Southbound Oct 14 06:16:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:37.200 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-b3df6336-119f-4ceb-8286-b5fbbf09920b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-b3df6336-119f-4ceb-8286-b5fbbf09920b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7bf1be3a6a454996a4414fad306906f1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e242c490-6d6b-4f00-b5f3-0df7926f9f2c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=23e1c194-7307-4419-b327-510181e0520f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:37.202 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 23e1c194-7307-4419-b327-510181e0520f in datapath b3df6336-119f-4ceb-8286-b5fbbf09920b unbound from our chassis#033[00m Oct 14 06:16:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:37.207 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b3df6336-119f-4ceb-8286-b5fbbf09920b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:37.209 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd8dd1e-9090-4f4f-8f7c-b9c040473154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:37 localhost nova_compute[295778]: 2025-10-14 10:16:37.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s 
rd, 935 B/s wr, 21 op/s Oct 14 06:16:38 localhost nova_compute[295778]: 2025-10-14 10:16:38.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:38 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:38.848 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b4:89 2001:db8::f816:3eff:fe63:b489'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bb90059a-750e-43da-ba16-03b3dce8c155) old=Port_Binding(mac=['fa:16:3e:63:b4:89 10.100.0.2 2001:db8::f816:3eff:fe63:b489'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 
'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:38 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:38.850 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bb90059a-750e-43da-ba16-03b3dce8c155 in datapath 74049e43-4aa7-4318-9233-a58980c3495b updated#033[00m Oct 14 06:16:38 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:38.854 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:38 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:38.856 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9aa234-04cd-462b-afaf-25c1b00feb26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:16:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:16:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:16:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:16:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:16:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:16:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 935 B/s wr, 21 op/s Oct 14 06:16:39 localhost dnsmasq[334994]: exiting on receipt of SIGTERM Oct 14 06:16:39 localhost podman[336315]: 2025-10-14 10:16:39.590347363 +0000 UTC m=+0.055899518 container kill bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009) Oct 14 06:16:39 localhost systemd[1]: libpod-bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7.scope: Deactivated successfully. 
Oct 14 06:16:39 localhost podman[336329]: 2025-10-14 10:16:39.661975329 +0000 UTC m=+0.054347837 container died bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:39 localhost podman[336329]: 2025-10-14 10:16:39.692597504 +0000 UTC m=+0.084969972 container cleanup bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:16:39 localhost systemd[1]: libpod-conmon-bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7.scope: Deactivated successfully. 
Oct 14 06:16:39 localhost podman[336330]: 2025-10-14 10:16:39.740279833 +0000 UTC m=+0.128456609 container remove bdde31f7d1dbbe779b7ca9d8a2371ba2749c5abf8f9ee417e4584e32e36952e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b3df6336-119f-4ceb-8286-b5fbbf09920b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:39.832 270389 INFO neutron.agent.dhcp.agent [None req-7a534f13-d169-42af-9da1-1d747f581d60 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:39.832 270389 INFO neutron.agent.dhcp.agent [None req-7a534f13-d169-42af-9da1-1d747f581d60 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:40 localhost nova_compute[295778]: 2025-10-14 10:16:40.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:40 localhost nova_compute[295778]: 2025-10-14 10:16:40.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:40 localhost systemd[1]: var-lib-containers-storage-overlay-cd4ca8813dca59c5ee428e4878f3d6ba76beeb75031425e56d311908691b2eed-merged.mount: Deactivated successfully. 
Oct 14 06:16:40 localhost systemd[1]: run-netns-qdhcp\x2db3df6336\x2d119f\x2d4ceb\x2d8286\x2db5fbbf09920b.mount: Deactivated successfully. Oct 14 06:16:40 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:40.768 270389 INFO neutron.agent.linux.ip_lib [None req-bef6e1c6-5587-4ea3-b103-c474f1fe5523 - - - - - -] Device tapa74feea7-5e cannot be used as it has no MAC address#033[00m Oct 14 06:16:40 localhost nova_compute[295778]: 2025-10-14 10:16:40.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:40 localhost kernel: device tapa74feea7-5e entered promiscuous mode Oct 14 06:16:40 localhost NetworkManager[5972]: [1760437000.8059] manager: (tapa74feea7-5e): new Generic device (/org/freedesktop/NetworkManager/Devices/55) Oct 14 06:16:40 localhost ovn_controller[156286]: 2025-10-14T10:16:40Z|00296|binding|INFO|Claiming lport a74feea7-5eaf-4b1f-8434-21a019b3b011 for this chassis. Oct 14 06:16:40 localhost ovn_controller[156286]: 2025-10-14T10:16:40Z|00297|binding|INFO|a74feea7-5eaf-4b1f-8434-21a019b3b011: Claiming unknown Oct 14 06:16:40 localhost nova_compute[295778]: 2025-10-14 10:16:40.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:40 localhost systemd-udevd[336369]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:16:40 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:40.816 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe61:9391/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a74feea7-5eaf-4b1f-8434-21a019b3b011) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:40 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:40.818 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a74feea7-5eaf-4b1f-8434-21a019b3b011 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:16:40 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:40.822 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 108d74c2-6e92-4e30-b646-dabed1ce259e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:16:40 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:40.822 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:40 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:40.823 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[d17f4f7e-91fc-4b1c-96f4-10058de10211]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:40 localhost journal[236030]: ethtool ioctl error on tapa74feea7-5e: No such device Oct 14 06:16:40 localhost ovn_controller[156286]: 2025-10-14T10:16:40Z|00298|binding|INFO|Setting lport a74feea7-5eaf-4b1f-8434-21a019b3b011 ovn-installed in OVS Oct 14 06:16:40 localhost ovn_controller[156286]: 2025-10-14T10:16:40Z|00299|binding|INFO|Setting lport a74feea7-5eaf-4b1f-8434-21a019b3b011 up in Southbound Oct 14 06:16:40 localhost nova_compute[295778]: 2025-10-14 10:16:40.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:40 localhost journal[236030]: ethtool ioctl error on tapa74feea7-5e: No such device Oct 14 06:16:40 localhost journal[236030]: ethtool ioctl error on tapa74feea7-5e: No such device Oct 14 06:16:40 localhost journal[236030]: ethtool ioctl error on tapa74feea7-5e: No such device Oct 14 06:16:40 localhost journal[236030]: ethtool ioctl error on tapa74feea7-5e: No such device Oct 14 06:16:40 localhost journal[236030]: ethtool ioctl error on tapa74feea7-5e: No such device Oct 14 06:16:40 localhost journal[236030]: ethtool ioctl error on tapa74feea7-5e: No such device Oct 14 06:16:40 localhost journal[236030]: ethtool ioctl error on tapa74feea7-5e: No such device Oct 14 
06:16:40 localhost nova_compute[295778]: 2025-10-14 10:16:40.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:40 localhost nova_compute[295778]: 2025-10-14 10:16:40.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:41 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:41.018 2 INFO neutron.agent.securitygroups_rpc [None req-0a5df930-2184-4323-aca9-39b249cdad65 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:41.035 270389 INFO neutron.agent.linux.ip_lib [None req-ce299066-2ff9-4474-a3cf-88a66be379cf - - - - - -] Device tap7a965d3e-2b cannot be used as it has no MAC address#033[00m Oct 14 06:16:41 localhost nova_compute[295778]: 2025-10-14 10:16:41.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:41 localhost kernel: device tap7a965d3e-2b entered promiscuous mode Oct 14 06:16:41 localhost ovn_controller[156286]: 2025-10-14T10:16:41Z|00300|binding|INFO|Claiming lport 7a965d3e-2bb6-4ee4-8505-34291527e2fc for this chassis. 
Oct 14 06:16:41 localhost nova_compute[295778]: 2025-10-14 10:16:41.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:41 localhost ovn_controller[156286]: 2025-10-14T10:16:41Z|00301|binding|INFO|7a965d3e-2bb6-4ee4-8505-34291527e2fc: Claiming unknown Oct 14 06:16:41 localhost NetworkManager[5972]: [1760437001.0691] manager: (tap7a965d3e-2b): new Generic device (/org/freedesktop/NetworkManager/Devices/56) Oct 14 06:16:41 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:41.082 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::1/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-25ab9eb4-f154-40b8-8c63-16508e739e60', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25ab9eb4-f154-40b8-8c63-16508e739e60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1b284821e574367bb6352caf7327da5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95de817e-5462-4916-be68-154bc371b9bc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7a965d3e-2bb6-4ee4-8505-34291527e2fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:41 localhost ovn_metadata_agent[161927]: 
2025-10-14 10:16:41.084 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 7a965d3e-2bb6-4ee4-8505-34291527e2fc in datapath 25ab9eb4-f154-40b8-8c63-16508e739e60 bound to our chassis#033[00m Oct 14 06:16:41 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:41.086 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 25ab9eb4-f154-40b8-8c63-16508e739e60 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:16:41 localhost nova_compute[295778]: 2025-10-14 10:16:41.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:41 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:41.088 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2bf27c-5438-4e6c-8848-3dbdacb06d86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:41 localhost ovn_controller[156286]: 2025-10-14T10:16:41Z|00302|binding|INFO|Setting lport 7a965d3e-2bb6-4ee4-8505-34291527e2fc ovn-installed in OVS Oct 14 06:16:41 localhost ovn_controller[156286]: 2025-10-14T10:16:41Z|00303|binding|INFO|Setting lport 7a965d3e-2bb6-4ee4-8505-34291527e2fc up in Southbound Oct 14 06:16:41 localhost nova_compute[295778]: 2025-10-14 10:16:41.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:41 localhost nova_compute[295778]: 2025-10-14 10:16:41.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:41 localhost nova_compute[295778]: 2025-10-14 10:16:41.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:41 
localhost nova_compute[295778]: 2025-10-14 10:16:41.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Oct 14 06:16:41 localhost podman[336480]: Oct 14 06:16:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:41.949 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:41Z, description=, device_id=e707dd68-ed65-47c1-aba8-97f5082fcbee, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=22c2cf76-da39-42b0-9dcf-320a67d0bff5, ip_allocation=immediate, mac_address=fa:16:3e:09:e4:dc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:16:03Z, description=, dns_domain=, id=2579b986-1ecd-41e1-9c29-23fe56d2546f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-714804357, port_security_enabled=True, project_id=350a918b3c8b45c8b7f0665a734b2d1c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25402, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1972, status=ACTIVE, subnets=['27f0f1a5-29d5-489d-94c6-ba608383b96b'], tags=[], tenant_id=350a918b3c8b45c8b7f0665a734b2d1c, updated_at=2025-10-14T10:16:05Z, vlan_transparent=None, network_id=2579b986-1ecd-41e1-9c29-23fe56d2546f, port_security_enabled=False, project_id=350a918b3c8b45c8b7f0665a734b2d1c, qos_network_policy_id=None, qos_policy_id=None, 
resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2137, status=DOWN, tags=[], tenant_id=350a918b3c8b45c8b7f0665a734b2d1c, updated_at=2025-10-14T10:16:41Z on network 2579b986-1ecd-41e1-9c29-23fe56d2546f#033[00m Oct 14 06:16:41 localhost podman[336480]: 2025-10-14 10:16:41.960408236 +0000 UTC m=+0.092718588 container create 877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 06:16:42 localhost podman[336480]: 2025-10-14 10:16:41.917047153 +0000 UTC m=+0.049357525 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:42 localhost systemd[1]: Started libpod-conmon-877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d.scope. Oct 14 06:16:42 localhost systemd[1]: Started libcrun container. 
Oct 14 06:16:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f46606db0caf78b48bf618aa8a58451c3f7ba8cab359f4c1736c7e358205fe5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:42 localhost podman[336480]: 2025-10-14 10:16:42.104087678 +0000 UTC m=+0.236398020 container init 877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 14 06:16:42 localhost podman[336480]: 2025-10-14 10:16:42.121373028 +0000 UTC m=+0.253683380 container start 877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:16:42 localhost dnsmasq[336526]: started, version 2.85 cachesize 150 Oct 14 06:16:42 localhost dnsmasq[336526]: DNS service limited to local subnets Oct 14 06:16:42 localhost dnsmasq[336526]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:42 localhost dnsmasq[336526]: warning: no upstream servers 
configured Oct 14 06:16:42 localhost dnsmasq[336526]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.186 270389 INFO neutron.agent.dhcp.agent [None req-bef6e1c6-5587-4ea3-b103-c474f1fe5523 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:40Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ea38b764-7953-4795-b496-35918f4d68bb, ip_allocation=immediate, mac_address=fa:16:3e:43:05:d4, name=tempest-NetworksTestDHCPv6-103356957, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=38, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['7fb08dba-845f-4378-934a-b7089ad421b1'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:35Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2136, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:40Z on network 
74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:16:42 localhost dnsmasq[336526]: exiting on receipt of SIGTERM Oct 14 06:16:42 localhost systemd[1]: libpod-877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d.scope: Deactivated successfully. Oct 14 06:16:42 localhost podman[336537]: 2025-10-14 10:16:42.24624413 +0000 UTC m=+0.082684040 container died 877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 06:16:42 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:42.278 2 INFO neutron.agent.securitygroups_rpc [None req-5c76f8fb-dc30-4efd-ac5a-17d64524fe3e 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:42 localhost podman[336537]: 2025-10-14 10:16:42.281083237 +0000 UTC m=+0.117523117 container cleanup 877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.287 
270389 INFO neutron.agent.dhcp.agent [None req-2c30395f-a1a6-48b8-a821-b6f4913536d4 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:16:42 localhost dnsmasq[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/addn_hosts - 1 addresses Oct 14 06:16:42 localhost dnsmasq-dhcp[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/host Oct 14 06:16:42 localhost dnsmasq-dhcp[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/opts Oct 14 06:16:42 localhost podman[336544]: 2025-10-14 10:16:42.3116544 +0000 UTC m=+0.122419337 container kill b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:16:42 localhost podman[336567]: 2025-10-14 10:16:42.406417791 +0000 UTC m=+0.162738591 container cleanup 877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 06:16:42 localhost systemd[1]: 
libpod-conmon-877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d.scope: Deactivated successfully. Oct 14 06:16:42 localhost podman[336590]: 2025-10-14 10:16:42.434642402 +0000 UTC m=+0.135613679 container remove 877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.486 270389 ERROR neutron.agent.linux.utils [None req-bef6e1c6-5587-4ea3-b103-c474f1fe5523 - - - - - -] Exit code: 125; Cmd: ['/etc/neutron/kill_scripts/dnsmasq-kill', 'HUP', 336526]; Stdin: ; Stdout: Tue Oct 14 10:16:42 AM UTC 2025 Sending signal 'HUP' to () Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: ; Stderr: awk: cmd. 
line:1: fatal: cannot open file `/proc/336526/cgroup' for reading: No such file or directory Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: Error: no names or ids specified Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: Error: you must provide at least one name or id Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: #033[00m Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent [None req-bef6e1c6-5587-4ea3-b103-c474f1fe5523 - - - - - -] Unable to reload_allocations dhcp for 74049e43-4aa7-4318-9233-a58980c3495b.: neutron_lib.exceptions.ProcessExecutionError: Exit code: 125; Cmd: ['/etc/neutron/kill_scripts/dnsmasq-kill', 'HUP', 336526]; Stdin: ; Stdout: Tue Oct 14 10:16:42 AM UTC 2025 Sending signal 'HUP' to () Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: ; Stderr: awk: cmd. line:1: fatal: cannot open file `/proc/336526/cgroup' for reading: No such file or directory Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: Error: no names or ids specified Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: Error: you must provide at least one name or id Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 671, in reload_allocations Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR 
neutron.agent.dhcp.agent self._spawn_or_reload_process(reload_with_HUP=True) Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 603, in _spawn_or_reload_process Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent pm.enable(reload_cfg=reload_with_HUP, ensure_active=True) Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/external_process.py", line 108, in enable Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent self.reload_cfg() Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/external_process.py", line 117, in reload_cfg Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent self.disable('HUP', delete_pid_file=False) Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/external_process.py", line 132, in disable Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent utils.execute(cmd, addl_env=self.cmd_addl_env, Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py", line 156, in execute Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent raise exceptions.ProcessExecutionError(msg, Oct 14 06:16:42 localhost 
neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent neutron_lib.exceptions.ProcessExecutionError: Exit code: 125; Cmd: ['/etc/neutron/kill_scripts/dnsmasq-kill', 'HUP', 336526]; Stdin: ; Stdout: Tue Oct 14 10:16:42 AM UTC 2025 Sending signal 'HUP' to () Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent ; Stderr: awk: cmd. line:1: fatal: cannot open file `/proc/336526/cgroup' for reading: No such file or directory Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent Error: no names or ids specified Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent Error: you must provide at least one name or id Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.489 270389 ERROR neutron.agent.dhcp.agent #033[00m Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.521 270389 INFO neutron.agent.dhcp.agent [None req-7909e978-6ae8-4035-b73d-387886f13708 - - - - - -] DHCP configuration for ports {'ea38b764-7953-4795-b496-35918f4d68bb'} is completed#033[00m Oct 14 06:16:42 localhost podman[336619]: Oct 14 06:16:42 localhost podman[336619]: 2025-10-14 10:16:42.55449609 +0000 UTC m=+0.099783125 container create 70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25ab9eb4-f154-40b8-8c63-16508e739e60, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:16:42 localhost systemd[1]: Started libpod-conmon-70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1.scope. Oct 14 06:16:42 localhost systemd[1]: Started libcrun container. Oct 14 06:16:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26333acacb435feac4b6f072468d7227c8cf8fc8733a4bacb1dfc3ab112fbeb4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:42 localhost podman[336619]: 2025-10-14 10:16:42.509860243 +0000 UTC m=+0.055147328 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:42 localhost podman[336619]: 2025-10-14 10:16:42.610323406 +0000 UTC m=+0.155610441 container init 70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25ab9eb4-f154-40b8-8c63-16508e739e60, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:16:42 localhost podman[336619]: 2025-10-14 10:16:42.620342323 +0000 UTC m=+0.165629378 container start 70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25ab9eb4-f154-40b8-8c63-16508e739e60, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251009) Oct 14 06:16:42 localhost dnsmasq[336643]: started, version 2.85 cachesize 150 Oct 14 06:16:42 localhost dnsmasq[336643]: DNS service limited to local subnets Oct 14 06:16:42 localhost dnsmasq[336643]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:42 localhost dnsmasq[336643]: warning: no upstream servers configured Oct 14 06:16:42 localhost dnsmasq-dhcp[336643]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 14 06:16:42 localhost dnsmasq[336643]: read /var/lib/neutron/dhcp/25ab9eb4-f154-40b8-8c63-16508e739e60/addn_hosts - 0 addresses Oct 14 06:16:42 localhost dnsmasq-dhcp[336643]: read /var/lib/neutron/dhcp/25ab9eb4-f154-40b8-8c63-16508e739e60/host Oct 14 06:16:42 localhost dnsmasq-dhcp[336643]: read /var/lib/neutron/dhcp/25ab9eb4-f154-40b8-8c63-16508e739e60/opts Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.654 270389 INFO neutron.agent.dhcp.agent [None req-cebf607d-4ffc-44ac-b9ad-58f61d141aa5 - - - - - -] DHCP configuration for ports {'22c2cf76-da39-42b0-9dcf-320a67d0bff5'} is completed#033[00m Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.680 270389 INFO neutron.agent.dhcp.agent [None req-ee9ab6b9-467f-4f1a-aecd-aafae3037e44 - - - - - -] Synchronizing state#033[00m Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.810 270389 INFO neutron.agent.dhcp.agent [None req-26584a87-1a70-4668-8844-33ae8d2f7722 - - - - - -] DHCP configuration for ports {'b327ae0d-6f3f-4a94-b79a-1f57c2238bac'} is completed#033[00m Oct 14 06:16:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:42.888 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 0a48ab83-6537-450b-a386-f9efe0e26a2a with type ""#033[00m Oct 14 06:16:42 localhost ovn_controller[156286]: 
2025-10-14T10:16:42Z|00304|binding|INFO|Removing iface tap7a965d3e-2b ovn-installed in OVS Oct 14 06:16:42 localhost ovn_controller[156286]: 2025-10-14T10:16:42Z|00305|binding|INFO|Removing lport 7a965d3e-2bb6-4ee4-8505-34291527e2fc ovn-installed in OVS Oct 14 06:16:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:42.890 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::1/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-25ab9eb4-f154-40b8-8c63-16508e739e60', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25ab9eb4-f154-40b8-8c63-16508e739e60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1b284821e574367bb6352caf7327da5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95de817e-5462-4916-be68-154bc371b9bc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7a965d3e-2bb6-4ee4-8505-34291527e2fc) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:42 localhost nova_compute[295778]: 2025-10-14 10:16:42.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:42.892 161932 INFO neutron.agent.ovn.metadata.agent [-] 
Port 7a965d3e-2bb6-4ee4-8505-34291527e2fc in datapath 25ab9eb4-f154-40b8-8c63-16508e739e60 unbound from our chassis#033[00m Oct 14 06:16:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:42.894 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 25ab9eb4-f154-40b8-8c63-16508e739e60 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:16:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:42.896 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[48a4a5d5-8566-47f9-9d01-89132a5f043b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.896 270389 INFO neutron.agent.dhcp.agent [None req-288f7abf-64c4-497e-9f96-6f462a2a409b - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.897 270389 INFO neutron.agent.dhcp.agent [-] Starting network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.898 270389 INFO neutron.agent.dhcp.agent [-] Finished network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:16:42 localhost nova_compute[295778]: 2025-10-14 10:16:42.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.899 270389 INFO neutron.agent.dhcp.agent [None req-288f7abf-64c4-497e-9f96-6f462a2a409b - - - - - -] Synchronizing state complete#033[00m Oct 14 06:16:42 localhost systemd[1]: tmp-crun.eqdC99.mount: Deactivated successfully. 
Oct 14 06:16:42 localhost systemd[1]: var-lib-containers-storage-overlay-6f46606db0caf78b48bf618aa8a58451c3f7ba8cab359f4c1736c7e358205fe5-merged.mount: Deactivated successfully. Oct 14 06:16:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-877d8acbf0cd42a88274d6cbfd0567824186396c93671e74ef953c67850f366d-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:42 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:42.977 270389 INFO neutron.agent.dhcp.agent [None req-34e041de-236d-4e63-bffc-4e36c99b5dfe - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'ea38b764-7953-4795-b496-35918f4d68bb'} is completed#033[00m Oct 14 06:16:43 localhost systemd[1]: tmp-crun.CxqEI0.mount: Deactivated successfully. Oct 14 06:16:43 localhost dnsmasq[336643]: exiting on receipt of SIGTERM Oct 14 06:16:43 localhost podman[336664]: 2025-10-14 10:16:43.191311652 +0000 UTC m=+0.085247479 container kill 70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25ab9eb4-f154-40b8-8c63-16508e739e60, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:43 localhost systemd[1]: libpod-70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1.scope: Deactivated successfully. 
Oct 14 06:16:43 localhost nova_compute[295778]: 2025-10-14 10:16:43.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:43 localhost podman[336690]: 2025-10-14 10:16:43.272290766 +0000 UTC m=+0.063328385 container died 70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25ab9eb4-f154-40b8-8c63-16508e739e60, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 06:16:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s Oct 14 06:16:43 localhost podman[336690]: 2025-10-14 10:16:43.304155364 +0000 UTC m=+0.095192943 container cleanup 70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25ab9eb4-f154-40b8-8c63-16508e739e60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:16:43 localhost systemd[1]: libpod-conmon-70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1.scope: Deactivated successfully. 
Oct 14 06:16:43 localhost podman[336692]: 2025-10-14 10:16:43.353376523 +0000 UTC m=+0.137395746 container remove 70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25ab9eb4-f154-40b8-8c63-16508e739e60, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 06:16:43 localhost kernel: device tap7a965d3e-2b left promiscuous mode Oct 14 06:16:43 localhost nova_compute[295778]: 2025-10-14 10:16:43.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:43 localhost nova_compute[295778]: 2025-10-14 10:16:43.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:43 localhost podman[336720]: Oct 14 06:16:43 localhost podman[336720]: 2025-10-14 10:16:43.396280345 +0000 UTC m=+0.117096136 container create 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 14 06:16:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:43.413 270389 INFO neutron.agent.dhcp.agent [None 
req-36a7b25d-0d41-4a66-8bac-364f146fc31c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:43 localhost podman[336720]: 2025-10-14 10:16:43.325652466 +0000 UTC m=+0.046468307 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:43 localhost systemd[1]: Started libpod-conmon-2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09.scope. Oct 14 06:16:43 localhost systemd[1]: Started libcrun container. Oct 14 06:16:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a3d46d7f4e07e2c1f871bf61446cf8e4f8723c9fa95d3230c28b9591067e30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:43 localhost podman[336720]: 2025-10-14 10:16:43.480997239 +0000 UTC m=+0.201813030 container init 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:16:43 localhost podman[336720]: 2025-10-14 10:16:43.493285855 +0000 UTC m=+0.214101646 container start 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:16:43 localhost dnsmasq[336743]: started, version 2.85 cachesize 150 Oct 14 06:16:43 localhost dnsmasq[336743]: DNS service limited to local subnets Oct 14 06:16:43 localhost dnsmasq[336743]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:43 localhost dnsmasq[336743]: warning: no upstream servers configured Oct 14 06:16:43 localhost dnsmasq[336743]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:43.566 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:43 localhost systemd[1]: var-lib-containers-storage-overlay-26333acacb435feac4b6f072468d7227c8cf8fc8733a4bacb1dfc3ab112fbeb4-merged.mount: Deactivated successfully. Oct 14 06:16:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70176d2817392752e64503eb7a7f5ea8948ad4c48839d79b3ae3beeb73744bc1-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:43 localhost systemd[1]: run-netns-qdhcp\x2d25ab9eb4\x2df154\x2d40b8\x2d8c63\x2d16508e739e60.mount: Deactivated successfully. 
Oct 14 06:16:44 localhost dnsmasq[336743]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:44 localhost podman[336769]: 2025-10-14 10:16:44.122865745 +0000 UTC m=+0.067953569 container kill 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:44 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:44.245 2 INFO neutron.agent.securitygroups_rpc [None req-14a4c602-5a85-4c5c-8a40-437ac242b70f 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:44.375 270389 INFO neutron.agent.dhcp.agent [None req-b4529ef3-0be7-460a-8466-0be0478df1c2 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a74feea7-5eaf-4b1f-8434-21a019b3b011'} is completed#033[00m Oct 14 06:16:44 localhost dnsmasq[336743]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:16:44 localhost systemd[1]: tmp-crun.fQE18B.mount: Deactivated successfully. 
Oct 14 06:16:44 localhost podman[336810]: 2025-10-14 10:16:44.609919052 +0000 UTC m=+0.072905630 container kill 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:16:44 localhost nova_compute[295778]: 2025-10-14 10:16:44.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:44 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:44.722 2 INFO neutron.agent.securitygroups_rpc [None req-5a983a26-4b0e-471b-bc37-40bd24908aed 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:44.831 270389 INFO neutron.agent.dhcp.agent [None req-e2fc1f3b-ea7a-4f0d-ba28-2da05b594dc8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:44Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0eb0b360-9c09-43b9-846c-3a5c334eb137, ip_allocation=immediate, 
mac_address=fa:16:3e:cd:4b:bb, name=tempest-NetworksTestDHCPv6-494846258, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=40, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['af231b76-fd17-4a7d-bad2-5e85727243bd'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:42Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2149, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:44Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:16:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:44.866 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:41Z, description=, device_id=e707dd68-ed65-47c1-aba8-97f5082fcbee, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=22c2cf76-da39-42b0-9dcf-320a67d0bff5, ip_allocation=immediate, mac_address=fa:16:3e:09:e4:dc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], 
created_at=2025-10-14T10:16:03Z, description=, dns_domain=, id=2579b986-1ecd-41e1-9c29-23fe56d2546f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-714804357, port_security_enabled=True, project_id=350a918b3c8b45c8b7f0665a734b2d1c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25402, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1972, status=ACTIVE, subnets=['27f0f1a5-29d5-489d-94c6-ba608383b96b'], tags=[], tenant_id=350a918b3c8b45c8b7f0665a734b2d1c, updated_at=2025-10-14T10:16:05Z, vlan_transparent=None, network_id=2579b986-1ecd-41e1-9c29-23fe56d2546f, port_security_enabled=False, project_id=350a918b3c8b45c8b7f0665a734b2d1c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2137, status=DOWN, tags=[], tenant_id=350a918b3c8b45c8b7f0665a734b2d1c, updated_at=2025-10-14T10:16:41Z on network 2579b986-1ecd-41e1-9c29-23fe56d2546f#033[00m Oct 14 06:16:45 localhost dnsmasq[336743]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:16:45 localhost systemd[1]: tmp-crun.ldBW2L.mount: Deactivated successfully. 
Oct 14 06:16:45 localhost podman[336849]: 2025-10-14 10:16:45.040915209 +0000 UTC m=+0.069232193 container kill 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:16:45 localhost podman[336873]: 2025-10-14 10:16:45.153241257 +0000 UTC m=+0.087134909 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 06:16:45 localhost podman[336888]: 2025-10-14 10:16:45.185412412 +0000 UTC m=+0.075623743 container kill b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:16:45 localhost dnsmasq[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/addn_hosts - 1 addresses Oct 14 06:16:45 localhost dnsmasq-dhcp[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/host Oct 14 06:16:45 localhost dnsmasq-dhcp[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/opts Oct 14 06:16:45 localhost podman[336873]: 2025-10-14 10:16:45.226128186 +0000 UTC m=+0.160021788 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:16:45 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:16:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 938 B/s wr, 34 op/s Oct 14 06:16:45 localhost nova_compute[295778]: 2025-10-14 10:16:45.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:45 localhost dnsmasq[336743]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:45 localhost podman[336940]: 2025-10-14 10:16:45.458337274 +0000 UTC m=+0.068187196 container kill 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:16:45 localhost dnsmasq[336743]: exiting on receipt of SIGTERM Oct 14 06:16:45 localhost podman[336977]: 2025-10-14 10:16:45.868806773 +0000 UTC m=+0.070235839 container kill 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:16:45 localhost systemd[1]: 
libpod-2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09.scope: Deactivated successfully. Oct 14 06:16:45 localhost podman[336991]: 2025-10-14 10:16:45.946772307 +0000 UTC m=+0.063593243 container died 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 06:16:45 localhost podman[336991]: 2025-10-14 10:16:45.976315364 +0000 UTC m=+0.093136270 container cleanup 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:16:45 localhost systemd[1]: libpod-conmon-2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09.scope: Deactivated successfully. Oct 14 06:16:46 localhost systemd[1]: var-lib-containers-storage-overlay-80a3d46d7f4e07e2c1f871bf61446cf8e4f8723c9fa95d3230c28b9591067e30-merged.mount: Deactivated successfully. Oct 14 06:16:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:16:46 localhost podman[336993]: 2025-10-14 10:16:46.042931455 +0000 UTC m=+0.146676813 container remove 2dee7c28994409b259219ce829e36549ba7ec0603a0c5c457d56c903a986ea09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 06:16:46 localhost nova_compute[295778]: 2025-10-14 10:16:46.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:46 localhost kernel: device tapa74feea7-5e left promiscuous mode Oct 14 06:16:46 localhost ovn_controller[156286]: 2025-10-14T10:16:46Z|00306|binding|INFO|Releasing lport a74feea7-5eaf-4b1f-8434-21a019b3b011 from this chassis (sb_readonly=0) Oct 14 06:16:46 localhost ovn_controller[156286]: 2025-10-14T10:16:46Z|00307|binding|INFO|Setting lport a74feea7-5eaf-4b1f-8434-21a019b3b011 down in Southbound Oct 14 06:16:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:46.070 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe61:9391/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a74feea7-5eaf-4b1f-8434-21a019b3b011) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:46.073 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a74feea7-5eaf-4b1f-8434-21a019b3b011 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:16:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:46.077 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:46.079 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[f4460566-c305-46fd-b9a5-b6b9ee4f79a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:46 localhost nova_compute[295778]: 2025-10-14 10:16:46.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:46.400 270389 INFO neutron.agent.dhcp.agent [None req-24d3ef4e-419a-4599-8eb2-b5187a98fce3 - - - - - -] DHCP 
configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', 'a74feea7-5eaf-4b1f-8434-21a019b3b011', '0eb0b360-9c09-43b9-846c-3a5c334eb137'} is completed#033[00m Oct 14 06:16:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:46.565 270389 INFO neutron.agent.dhcp.agent [None req-b28246f7-6e82-46b9-835f-ef8931cdc047 - - - - - -] DHCP configuration for ports {'22c2cf76-da39-42b0-9dcf-320a67d0bff5', '0eb0b360-9c09-43b9-846c-3a5c334eb137'} is completed#033[00m Oct 14 06:16:46 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. Oct 14 06:16:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:46.674 270389 INFO neutron.agent.dhcp.agent [None req-f9ab97e0-f368-4b5f-866c-73f58104c070 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:47 localhost dnsmasq[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/addn_hosts - 0 addresses Oct 14 06:16:47 localhost podman[337038]: 2025-10-14 10:16:47.126151474 +0000 UTC m=+0.065446162 container kill b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:16:47 localhost dnsmasq-dhcp[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/host Oct 14 06:16:47 localhost dnsmasq-dhcp[334516]: read /var/lib/neutron/dhcp/2579b986-1ecd-41e1-9c29-23fe56d2546f/opts Oct 14 06:16:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 
145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 14 op/s Oct 14 06:16:47 localhost nova_compute[295778]: 2025-10-14 10:16:47.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:47 localhost ovn_controller[156286]: 2025-10-14T10:16:47Z|00308|binding|INFO|Releasing lport 751bfe6e-ed21-4412-8a34-1ddae80aa076 from this chassis (sb_readonly=0) Oct 14 06:16:47 localhost ovn_controller[156286]: 2025-10-14T10:16:47Z|00309|binding|INFO|Setting lport 751bfe6e-ed21-4412-8a34-1ddae80aa076 down in Southbound Oct 14 06:16:47 localhost kernel: device tap751bfe6e-ed left promiscuous mode Oct 14 06:16:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:47.362 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-2579b986-1ecd-41e1-9c29-23fe56d2546f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2579b986-1ecd-41e1-9c29-23fe56d2546f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '350a918b3c8b45c8b7f0665a734b2d1c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af5bf630-6933-47f0-af34-6cb52eb844c9, chassis=[], tunnel_key=2, gateway_chassis=[], 
requested_chassis=[], logical_port=751bfe6e-ed21-4412-8a34-1ddae80aa076) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:47.364 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 751bfe6e-ed21-4412-8a34-1ddae80aa076 in datapath 2579b986-1ecd-41e1-9c29-23fe56d2546f unbound from our chassis#033[00m Oct 14 06:16:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:47.368 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2579b986-1ecd-41e1-9c29-23fe56d2546f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:47.369 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[6f591ee4-6921-402d-8195-f688cd514e3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:47 localhost nova_compute[295778]: 2025-10-14 10:16:47.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:47 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:47.917 270389 INFO neutron.agent.linux.ip_lib [None req-00067804-5d78-4149-b6e7-1bb50b62a77d - - - - - -] Device tap878436ac-cd cannot be used as it has no MAC address#033[00m Oct 14 06:16:47 localhost nova_compute[295778]: 2025-10-14 10:16:47.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:47 localhost kernel: device tap878436ac-cd entered promiscuous mode Oct 14 06:16:47 localhost ovn_controller[156286]: 2025-10-14T10:16:47Z|00310|binding|INFO|Claiming lport 878436ac-cd7b-4752-8abf-b93710fb481c for this chassis. 
Oct 14 06:16:47 localhost NetworkManager[5972]: [1760437007.9465] manager: (tap878436ac-cd): new Generic device (/org/freedesktop/NetworkManager/Devices/57) Oct 14 06:16:47 localhost ovn_controller[156286]: 2025-10-14T10:16:47Z|00311|binding|INFO|878436ac-cd7b-4752-8abf-b93710fb481c: Claiming unknown Oct 14 06:16:47 localhost nova_compute[295778]: 2025-10-14 10:16:47.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:47 localhost systemd-udevd[337069]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:16:47 localhost ovn_controller[156286]: 2025-10-14T10:16:47Z|00312|binding|INFO|Setting lport 878436ac-cd7b-4752-8abf-b93710fb481c ovn-installed in OVS Oct 14 06:16:47 localhost nova_compute[295778]: 2025-10-14 10:16:47.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:47 localhost nova_compute[295778]: 2025-10-14 10:16:47.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:47 localhost ovn_controller[156286]: 2025-10-14T10:16:47Z|00313|binding|INFO|Setting lport 878436ac-cd7b-4752-8abf-b93710fb481c up in Southbound Oct 14 06:16:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:47.963 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb4:523d/64', 'neutron:device_id': 
'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=878436ac-cd7b-4752-8abf-b93710fb481c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:47.965 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 878436ac-cd7b-4752-8abf-b93710fb481c in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:16:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:47.973 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port dc19d99f-a49a-4826-9f64-5f1c699ca078 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:16:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:47.973 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:47 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:47.975 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[184df773-e6a6-4c28-bbc0-5426f406b6ff]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:47 localhost journal[236030]: ethtool ioctl error on tap878436ac-cd: No such device Oct 14 06:16:47 localhost journal[236030]: ethtool ioctl error on tap878436ac-cd: No such device Oct 14 06:16:47 localhost journal[236030]: ethtool ioctl error on tap878436ac-cd: No such device Oct 14 06:16:47 localhost nova_compute[295778]: 2025-10-14 10:16:47.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:47 localhost journal[236030]: ethtool ioctl error on tap878436ac-cd: No such device Oct 14 06:16:48 localhost journal[236030]: ethtool ioctl error on tap878436ac-cd: No such device Oct 14 06:16:48 localhost journal[236030]: ethtool ioctl error on tap878436ac-cd: No such device Oct 14 06:16:48 localhost journal[236030]: ethtool ioctl error on tap878436ac-cd: No such device Oct 14 06:16:48 localhost journal[236030]: ethtool ioctl error on tap878436ac-cd: No such device Oct 14 06:16:48 localhost nova_compute[295778]: 2025-10-14 10:16:48.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:48 localhost nova_compute[295778]: 2025-10-14 10:16:48.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:48 localhost nova_compute[295778]: 2025-10-14 10:16:48.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:48 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:48.499 2 INFO neutron.agent.securitygroups_rpc [None req-8e3d5cd7-e138-4dba-9b9e-24fb34a7ff8a 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m 
Oct 14 06:16:48 localhost podman[337140]: Oct 14 06:16:48 localhost podman[337140]: 2025-10-14 10:16:48.933888726 +0000 UTC m=+0.092321127 container create ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 06:16:48 localhost systemd[1]: Started libpod-conmon-ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6.scope. Oct 14 06:16:48 localhost podman[337140]: 2025-10-14 10:16:48.889638009 +0000 UTC m=+0.048070450 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:49 localhost systemd[1]: Started libcrun container. 
Oct 14 06:16:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5074a6ba1f8cade5d57ae1cf3334519d19e6bec9c256003d37d07a3c5ee6539/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:49 localhost podman[337140]: 2025-10-14 10:16:49.030954899 +0000 UTC m=+0.189387300 container init ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:16:49 localhost podman[337140]: 2025-10-14 10:16:49.041767796 +0000 UTC m=+0.200200197 container start ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:49 localhost dnsmasq[337158]: started, version 2.85 cachesize 150 Oct 14 06:16:49 localhost dnsmasq[337158]: DNS service limited to local subnets Oct 14 06:16:49 localhost dnsmasq[337158]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:49 localhost dnsmasq[337158]: warning: no upstream servers 
configured Oct 14 06:16:49 localhost dnsmasq[337158]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:49.104 270389 INFO neutron.agent.dhcp.agent [None req-00067804-5d78-4149-b6e7-1bb50b62a77d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:47Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d5fb1316-814a-4499-8251-d997fc4d3cb9, ip_allocation=immediate, mac_address=fa:16:3e:ab:12:0c, name=tempest-NetworksTestDHCPv6-467011514, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=42, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['4218289f-70a4-45cf-8571-1dc31989e966'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:45Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2152, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:48Z on network 
74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:16:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:49.252 270389 INFO neutron.agent.dhcp.agent [None req-456e547d-aaca-4138-bbf3-8338006bec0e - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:16:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 14 op/s Oct 14 06:16:49 localhost dnsmasq[337158]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:16:49 localhost podman[337177]: 2025-10-14 10:16:49.319118135 +0000 UTC m=+0.067913088 container kill ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Oct 14 06:16:49 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:49.360 2 INFO neutron.agent.securitygroups_rpc [None req-fb81f250-1cad-43a0-86a6-1fe40cd0df92 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:49.579 270389 INFO neutron.agent.dhcp.agent [None req-fc604014-90c4-4282-b23c-5c364ce90968 - - - - - -] DHCP configuration for ports {'d5fb1316-814a-4499-8251-d997fc4d3cb9'} is completed#033[00m Oct 14 06:16:49 localhost dnsmasq[337158]: read 
/var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:49 localhost podman[337214]: 2025-10-14 10:16:49.680857168 +0000 UTC m=+0.060267444 container kill ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:16:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 
localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.982 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.982 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.982 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.982 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.983 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:16:49.983 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:16:50 localhost podman[337252]: 2025-10-14 10:16:50.189270984 +0000 UTC 
m=+0.074480392 container kill 0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfdbdb17-6bbf-4fee-8769-34c2b86d2981, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 06:16:50 localhost dnsmasq[335135]: exiting on receipt of SIGTERM Oct 14 06:16:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:16:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:16:50 localhost systemd[1]: tmp-crun.MsCsy4.mount: Deactivated successfully. Oct 14 06:16:50 localhost systemd[1]: libpod-0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809.scope: Deactivated successfully. 
Oct 14 06:16:50 localhost podman[337277]: 2025-10-14 10:16:50.283538962 +0000 UTC m=+0.064390974 container died 0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfdbdb17-6bbf-4fee-8769-34c2b86d2981, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2) Oct 14 06:16:50 localhost podman[337277]: 2025-10-14 10:16:50.319681413 +0000 UTC m=+0.100533375 container cleanup 0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfdbdb17-6bbf-4fee-8769-34c2b86d2981, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:16:50 localhost systemd[1]: libpod-conmon-0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809.scope: Deactivated successfully. 
Oct 14 06:16:50 localhost dnsmasq[337158]: exiting on receipt of SIGTERM Oct 14 06:16:50 localhost podman[337325]: 2025-10-14 10:16:50.351022407 +0000 UTC m=+0.051076950 container kill ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:50 localhost systemd[1]: libpod-ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6.scope: Deactivated successfully. Oct 14 06:16:50 localhost podman[337273]: 2025-10-14 10:16:50.421159633 +0000 UTC m=+0.207854331 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:16:50 localhost podman[337348]: 2025-10-14 10:16:50.441802143 +0000 UTC m=+0.067648362 container died ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:16:50 localhost nova_compute[295778]: 2025-10-14 10:16:50.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:50 localhost podman[337273]: 2025-10-14 10:16:50.458131106 +0000 UTC m=+0.244825754 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:16:50 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:16:50 localhost podman[337275]: 2025-10-14 10:16:50.528323324 +0000 UTC m=+0.309838064 container remove 0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfdbdb17-6bbf-4fee-8769-34c2b86d2981, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 14 06:16:50 localhost nova_compute[295778]: 2025-10-14 10:16:50.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:50 localhost kernel: device tap6fd9907e-ef left promiscuous mode Oct 14 06:16:50 localhost ovn_controller[156286]: 2025-10-14T10:16:50Z|00314|binding|INFO|Releasing lport 6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa from this chassis (sb_readonly=0) Oct 14 06:16:50 localhost ovn_controller[156286]: 2025-10-14T10:16:50Z|00315|binding|INFO|Setting lport 6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa down in Southbound Oct 14 06:16:50 localhost nova_compute[295778]: 2025-10-14 10:16:50.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:50.567 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-dfdbdb17-6bbf-4fee-8769-34c2b86d2981', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfdbdb17-6bbf-4fee-8769-34c2b86d2981', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '350a918b3c8b45c8b7f0665a734b2d1c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c845b6cc-56ab-48b0-bff8-a12356f33c56, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:50.570 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 6fd9907e-ef3f-47a9-97a5-eda6b2cc31fa in datapath dfdbdb17-6bbf-4fee-8769-34c2b86d2981 unbound from our chassis#033[00m Oct 14 06:16:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:50.574 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfdbdb17-6bbf-4fee-8769-34c2b86d2981, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:50.575 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[25449044-715a-4b79-bca4-f1f3bd8836a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:50 localhost podman[337348]: 2025-10-14 10:16:50.593341524 +0000 UTC m=+0.219187713 container remove 
ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:16:50 localhost podman[337276]: 2025-10-14 10:16:50.448190792 +0000 UTC m=+0.232755393 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:16:50 localhost systemd[1]: libpod-conmon-ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6.scope: Deactivated successfully. 
Oct 14 06:16:50 localhost nova_compute[295778]: 2025-10-14 10:16:50.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:50 localhost ovn_controller[156286]: 2025-10-14T10:16:50Z|00316|binding|INFO|Releasing lport 878436ac-cd7b-4752-8abf-b93710fb481c from this chassis (sb_readonly=0) Oct 14 06:16:50 localhost ovn_controller[156286]: 2025-10-14T10:16:50Z|00317|binding|INFO|Setting lport 878436ac-cd7b-4752-8abf-b93710fb481c down in Southbound Oct 14 06:16:50 localhost kernel: device tap878436ac-cd left promiscuous mode Oct 14 06:16:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:50.623 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb4:523d/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=878436ac-cd7b-4752-8abf-b93710fb481c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:50.625 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 878436ac-cd7b-4752-8abf-b93710fb481c in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:16:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:50.629 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:50 localhost podman[337276]: 2025-10-14 10:16:50.631091288 +0000 UTC m=+0.415655909 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:16:50 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:50.630 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[7682538a-98fe-4b73-ae64-94da01c2d11e]: (4, 
False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:50 localhost nova_compute[295778]: 2025-10-14 10:16:50.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:50 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:16:50 localhost systemd[1]: var-lib-containers-storage-overlay-d5074a6ba1f8cade5d57ae1cf3334519d19e6bec9c256003d37d07a3c5ee6539-merged.mount: Deactivated successfully. Oct 14 06:16:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba3503940f7ac89c7ce70f58e7730950d57fefdbfe157a80e03d147d0188f4f6-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:50 localhost systemd[1]: var-lib-containers-storage-overlay-9c562fd72ba2302b41bca47dd3b24d460e89191a21f3491c27d7aa55c2edb059-merged.mount: Deactivated successfully. Oct 14 06:16:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cccbbbccf0a6405850cf1e062f5442097f691905fc1ae34067a5e3d74996809-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:16:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 28 op/s Oct 14 06:16:52 localhost podman[337407]: 2025-10-14 10:16:52.073853021 +0000 UTC m=+0.067824505 container kill 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 06:16:52 localhost dnsmasq[335082]: exiting on receipt of SIGTERM Oct 14 06:16:52 localhost systemd[1]: libpod-49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62.scope: Deactivated successfully. Oct 14 06:16:52 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. Oct 14 06:16:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:52.152 270389 INFO neutron.agent.dhcp.agent [None req-a5f50ad3-3d69-4f68-8f60-6cfdfd9552e9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:52.158 270389 INFO neutron.agent.dhcp.agent [None req-c8208440-4b75-4ef9-970e-c54bef800143 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:52 localhost systemd[1]: run-netns-qdhcp\x2ddfdbdb17\x2d6bbf\x2d4fee\x2d8769\x2d34c2b86d2981.mount: Deactivated successfully. 
Oct 14 06:16:52 localhost podman[337421]: 2025-10-14 10:16:52.16064603 +0000 UTC m=+0.070945758 container died 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:52 localhost podman[337421]: 2025-10-14 10:16:52.193885124 +0000 UTC m=+0.104184812 container cleanup 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:16:52 localhost systemd[1]: libpod-conmon-49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62.scope: Deactivated successfully. 
Oct 14 06:16:52 localhost podman[337423]: 2025-10-14 10:16:52.23884518 +0000 UTC m=+0.137843848 container remove 49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a7c0fe5-96d6-4107-a816-0bfeb02f7211, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:16:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:52.561 270389 INFO neutron.agent.dhcp.agent [None req-1d5da06d-727a-4760-9de7-05d997dd7fb1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:52.747 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:53 localhost systemd[1]: var-lib-containers-storage-overlay-4634e47a2cebf10366fe2d0be62f46efa865cb59572cb8cc262e80dfbd5502db-merged.mount: Deactivated successfully. Oct 14 06:16:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49e6fc9164dba1f135c5d7aabae6803982a7d52573e2b55970694b37947aac62-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:53 localhost systemd[1]: run-netns-qdhcp\x2d3a7c0fe5\x2d96d6\x2d4107\x2da816\x2d0bfeb02f7211.mount: Deactivated successfully. 
Oct 14 06:16:53 localhost nova_compute[295778]: 2025-10-14 10:16:53.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:53 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:53.266 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 21 op/s Oct 14 06:16:53 localhost nova_compute[295778]: 2025-10-14 10:16:53.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:54 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:54.314 2 INFO neutron.agent.securitygroups_rpc [None req-01a27e63-fa41-4775-b8b4-10aba7c88838 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:54 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:54.358 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:54 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:54.714 270389 INFO neutron.agent.linux.ip_lib [None req-92a8b3d6-79ce-454a-905b-3ea571a84411 - - - - - -] Device tap2d6e87c9-7a cannot be used as it has no MAC address#033[00m Oct 14 06:16:54 localhost nova_compute[295778]: 2025-10-14 10:16:54.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:54 localhost kernel: device tap2d6e87c9-7a entered promiscuous mode Oct 14 06:16:54 localhost NetworkManager[5972]: [1760437014.7712] manager: (tap2d6e87c9-7a): new Generic device 
(/org/freedesktop/NetworkManager/Devices/58) Oct 14 06:16:54 localhost ovn_controller[156286]: 2025-10-14T10:16:54Z|00318|binding|INFO|Claiming lport 2d6e87c9-7ac1-4e72-8397-4f0c81180d22 for this chassis. Oct 14 06:16:54 localhost nova_compute[295778]: 2025-10-14 10:16:54.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:54 localhost ovn_controller[156286]: 2025-10-14T10:16:54Z|00319|binding|INFO|2d6e87c9-7ac1-4e72-8397-4f0c81180d22: Claiming unknown Oct 14 06:16:54 localhost systemd-udevd[337459]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:16:54 localhost nova_compute[295778]: 2025-10-14 10:16:54.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:54 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:54.795 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fecf:cd38/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2d6e87c9-7ac1-4e72-8397-4f0c81180d22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:54 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:54.797 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 2d6e87c9-7ac1-4e72-8397-4f0c81180d22 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:16:54 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:54.800 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port f6bc1dd9-fab4-4367-9be1-a4745661a3c3 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:16:54 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:54.800 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:54 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:54.801 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[90b82c0b-52ff-48c6-afbb-52e699df28e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:54 localhost journal[236030]: ethtool ioctl error on tap2d6e87c9-7a: No such device Oct 14 06:16:54 localhost nova_compute[295778]: 2025-10-14 10:16:54.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:54 localhost ovn_controller[156286]: 2025-10-14T10:16:54Z|00320|binding|INFO|Setting lport 2d6e87c9-7ac1-4e72-8397-4f0c81180d22 ovn-installed in OVS Oct 14 06:16:54 localhost ovn_controller[156286]: 2025-10-14T10:16:54Z|00321|binding|INFO|Setting lport 2d6e87c9-7ac1-4e72-8397-4f0c81180d22 up in Southbound Oct 14 06:16:54 localhost nova_compute[295778]: 2025-10-14 10:16:54.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:54 localhost journal[236030]: ethtool ioctl error on tap2d6e87c9-7a: No such device Oct 14 06:16:54 localhost journal[236030]: ethtool ioctl error on tap2d6e87c9-7a: No such device Oct 14 06:16:54 localhost journal[236030]: ethtool ioctl error on tap2d6e87c9-7a: No such device Oct 14 06:16:54 localhost journal[236030]: ethtool ioctl error on tap2d6e87c9-7a: No such device Oct 14 06:16:54 localhost journal[236030]: ethtool ioctl error on tap2d6e87c9-7a: No such device Oct 14 06:16:54 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:54.835 2 INFO neutron.agent.securitygroups_rpc [None req-1c0d527f-ae60-410e-b28d-7969e7a7b040 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:54 localhost journal[236030]: ethtool ioctl error on tap2d6e87c9-7a: No such device Oct 14 06:16:54 localhost journal[236030]: ethtool ioctl error on tap2d6e87c9-7a: No such device Oct 14 06:16:54 localhost nova_compute[295778]: 2025-10-14 10:16:54.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:54 localhost nova_compute[295778]: 2025-10-14 10:16:54.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 192 MiB data, 831 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 58 op/s Oct 14 06:16:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e133 do_prune osdmap full prune enabled Oct 14 06:16:55 localhost nova_compute[295778]: 2025-10-14 10:16:55.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e134 e134: 6 total, 6 up, 6 in Oct 14 06:16:55 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e134: 6 total, 6 up, 6 in Oct 14 06:16:55 localhost podman[337539]: Oct 14 06:16:55 localhost podman[337539]: 2025-10-14 10:16:55.767345632 +0000 UTC m=+0.100603077 container create 354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 06:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:16:55 localhost systemd[1]: Started libpod-conmon-354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6.scope. 
Oct 14 06:16:55 localhost podman[337558]: 2025-10-14 10:16:55.821183214 +0000 UTC m=+0.083230715 container kill b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:16:55 localhost podman[337539]: 2025-10-14 10:16:55.720923987 +0000 UTC m=+0.054181452 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:55 localhost dnsmasq[334516]: exiting on receipt of SIGTERM Oct 14 06:16:55 localhost systemd[1]: libpod-b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e.scope: Deactivated successfully. Oct 14 06:16:55 localhost systemd[1]: Started libcrun container. 
Oct 14 06:16:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be82c360658a0b744e34fb184565b5dbd193274c1eccd7f44cf7799ac5e5016e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:55 localhost podman[337539]: 2025-10-14 10:16:55.862468213 +0000 UTC m=+0.195738158 container init 354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:16:55 localhost podman[337539]: 2025-10-14 10:16:55.875644143 +0000 UTC m=+0.208901548 container start 354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Oct 14 06:16:55 localhost dnsmasq[337616]: started, version 2.85 cachesize 150 Oct 14 06:16:55 localhost dnsmasq[337616]: DNS service limited to local subnets Oct 14 06:16:55 localhost dnsmasq[337616]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:55 localhost dnsmasq[337616]: warning: no upstream servers 
configured Oct 14 06:16:55 localhost dnsmasq-dhcp[337616]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:16:55 localhost dnsmasq[337616]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:55 localhost dnsmasq-dhcp[337616]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:16:55 localhost dnsmasq-dhcp[337616]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:16:55 localhost podman[337588]: 2025-10-14 10:16:55.913365607 +0000 UTC m=+0.075717026 container died b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:16:55 localhost podman[337588]: 2025-10-14 10:16:55.940703154 +0000 UTC m=+0.103054493 container cleanup b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 14 06:16:55 localhost systemd[1]: libpod-conmon-b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e.scope: Deactivated successfully. 
Oct 14 06:16:55 localhost podman[337571]: 2025-10-14 10:16:55.944861154 +0000 UTC m=+0.136094272 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 14 06:16:55 localhost podman[337571]: 2025-10-14 10:16:55.954061279 +0000 UTC m=+0.145294377 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:16:55 localhost podman[337573]: 2025-10-14 10:16:55.91085369 +0000 UTC m=+0.097842524 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, 
container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009) Oct 14 06:16:55 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:16:55 localhost podman[337573]: 2025-10-14 10:16:55.993460337 +0000 UTC m=+0.180449261 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 06:16:56 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:16:56 localhost podman[337601]: 2025-10-14 10:16:56.038093324 +0000 UTC m=+0.180108372 container remove b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2579b986-1ecd-41e1-9c29-23fe56d2546f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:16:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:56.064 270389 INFO neutron.agent.dhcp.agent [None req-8557a363-2c10-4651-9d8e-224807787cfe - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:16:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:56.070 270389 INFO neutron.agent.dhcp.agent [None req-d82824e5-631c-4e51-8308-00ab8ccd7483 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:56 localhost podman[337655]: 2025-10-14 10:16:56.219195273 +0000 UTC m=+0.060885321 container kill 354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:16:56 localhost dnsmasq[337616]: exiting on receipt of SIGTERM Oct 14 06:16:56 localhost systemd[1]: 
libpod-354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6.scope: Deactivated successfully. Oct 14 06:16:56 localhost podman[337669]: 2025-10-14 10:16:56.290200391 +0000 UTC m=+0.057175752 container died 354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:16:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:56.371 2 INFO neutron.agent.securitygroups_rpc [None req-c7cb125f-c945-411f-8038-bc7c7c3f8065 829aecbebfd54f24a9393e430b83d97d 5b0b6727285f4a5bbb8c9712a0e1046a - - default default] Security group rule updated ['475ead66-a9a3-40ac-9223-caee62f16474']#033[00m Oct 14 06:16:56 localhost podman[337669]: 2025-10-14 10:16:56.374220717 +0000 UTC m=+0.141196027 container cleanup 354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:16:56 localhost systemd[1]: libpod-conmon-354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6.scope: Deactivated successfully. 
Oct 14 06:16:56 localhost podman[337671]: 2025-10-14 10:16:56.399989963 +0000 UTC m=+0.157331097 container remove 354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:16:56 localhost ovn_controller[156286]: 2025-10-14T10:16:56Z|00322|binding|INFO|Releasing lport 2d6e87c9-7ac1-4e72-8397-4f0c81180d22 from this chassis (sb_readonly=0) Oct 14 06:16:56 localhost ovn_controller[156286]: 2025-10-14T10:16:56Z|00323|binding|INFO|Setting lport 2d6e87c9-7ac1-4e72-8397-4f0c81180d22 down in Southbound Oct 14 06:16:56 localhost kernel: device tap2d6e87c9-7a left promiscuous mode Oct 14 06:16:56 localhost nova_compute[295778]: 2025-10-14 10:16:56.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e134 do_prune osdmap full prune enabled Oct 14 06:16:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:56.460 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fecf:cd38/64', 'neutron:device_id': 
'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2d6e87c9-7ac1-4e72-8397-4f0c81180d22) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:56.462 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 2d6e87c9-7ac1-4e72-8397-4f0c81180d22 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:16:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:56.465 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:56 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:56.466 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[18c203dc-1378-4ab0-a1d5-508d8855d414]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:56 localhost nova_compute[295778]: 2025-10-14 10:16:56.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 
06:16:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e135 e135: 6 total, 6 up, 6 in Oct 14 06:16:56 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e135: 6 total, 6 up, 6 in Oct 14 06:16:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:56.764 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:56 localhost systemd[1]: tmp-crun.cokDRZ.mount: Deactivated successfully. Oct 14 06:16:56 localhost systemd[1]: var-lib-containers-storage-overlay-be82c360658a0b744e34fb184565b5dbd193274c1eccd7f44cf7799ac5e5016e-merged.mount: Deactivated successfully. Oct 14 06:16:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-354de733b18957e4c82401be37e76e62f002cfe8a8c14f014d6878e41d33ace6-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:56 localhost systemd[1]: var-lib-containers-storage-overlay-3a760d61ce8c98fe911d15844236c388e72f112750a46ad110e822e4e852fcbe-merged.mount: Deactivated successfully. Oct 14 06:16:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b142485f00f15da16df7ce7eaee2754d96e31c126574779c83ddff070d614d2e-userdata-shm.mount: Deactivated successfully. Oct 14 06:16:56 localhost systemd[1]: run-netns-qdhcp\x2d2579b986\x2d1ecd\x2d41e1\x2d9c29\x2d23fe56d2546f.mount: Deactivated successfully. Oct 14 06:16:56 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. 
Oct 14 06:16:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:56.847 270389 INFO neutron.agent.dhcp.agent [None req-f9c034f2-8d7a-442f-b606-0edcf86fa069 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:56 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:56.848 270389 INFO neutron.agent.dhcp.agent [None req-f9c034f2-8d7a-442f-b606-0edcf86fa069 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:16:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 192 MiB data, 831 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 2.7 MiB/s wr, 76 op/s Oct 14 06:16:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e135 do_prune osdmap full prune enabled Oct 14 06:16:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e136 e136: 6 total, 6 up, 6 in Oct 14 06:16:57 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e136: 6 total, 6 up, 6 in Oct 14 06:16:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:57.641 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:16:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:57.641 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:16:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:57.641 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:16:58 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:58.190 270389 INFO neutron.agent.linux.ip_lib [None req-fa331db5-30d9-4768-bc56-54624d3398a0 - - - - - -] Device tapd0d9188c-4f cannot be used as it has no MAC address#033[00m Oct 14 06:16:58 localhost nova_compute[295778]: 2025-10-14 10:16:58.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:58 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:58.289 2 INFO neutron.agent.securitygroups_rpc [None req-51870dd7-af9a-4a53-8ac4-a21928aff9e5 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:58 localhost kernel: device tapd0d9188c-4f entered promiscuous mode Oct 14 06:16:58 localhost NetworkManager[5972]: [1760437018.2952] manager: (tapd0d9188c-4f): new Generic device (/org/freedesktop/NetworkManager/Devices/59) Oct 14 06:16:58 localhost ovn_controller[156286]: 2025-10-14T10:16:58Z|00324|binding|INFO|Claiming lport d0d9188c-4f5a-44bd-9e0a-daca8d51cf91 for this chassis. Oct 14 06:16:58 localhost nova_compute[295778]: 2025-10-14 10:16:58.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:58 localhost ovn_controller[156286]: 2025-10-14T10:16:58Z|00325|binding|INFO|d0d9188c-4f5a-44bd-9e0a-daca8d51cf91: Claiming unknown Oct 14 06:16:58 localhost systemd-udevd[337708]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:16:58 localhost ovn_controller[156286]: 2025-10-14T10:16:58Z|00326|binding|INFO|Setting lport d0d9188c-4f5a-44bd-9e0a-daca8d51cf91 ovn-installed in OVS Oct 14 06:16:58 localhost nova_compute[295778]: 2025-10-14 10:16:58.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:58 localhost nova_compute[295778]: 2025-10-14 10:16:58.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:58 localhost ovn_controller[156286]: 2025-10-14T10:16:58Z|00327|binding|INFO|Setting lport d0d9188c-4f5a-44bd-9e0a-daca8d51cf91 up in Southbound Oct 14 06:16:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:58.305 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=d0d9188c-4f5a-44bd-9e0a-daca8d51cf91) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:16:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:58.307 161932 INFO neutron.agent.ovn.metadata.agent [-] Port d0d9188c-4f5a-44bd-9e0a-daca8d51cf91 in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:16:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:58.309 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port c8369a3a-7fbd-497d-be8f-323b89da954b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:16:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:58.309 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:16:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:16:58.310 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2c989d-9d8f-44dd-ab1c-e21f1b0d4836]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:16:58 localhost journal[236030]: ethtool ioctl error on tapd0d9188c-4f: No such device Oct 14 06:16:58 localhost nova_compute[295778]: 2025-10-14 10:16:58.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:58 localhost journal[236030]: ethtool ioctl error on tapd0d9188c-4f: No such device Oct 14 06:16:58 localhost journal[236030]: ethtool ioctl error on tapd0d9188c-4f: No such device Oct 14 06:16:58 localhost journal[236030]: ethtool ioctl error on tapd0d9188c-4f: No such device Oct 14 06:16:58 localhost journal[236030]: 
ethtool ioctl error on tapd0d9188c-4f: No such device Oct 14 06:16:58 localhost journal[236030]: ethtool ioctl error on tapd0d9188c-4f: No such device Oct 14 06:16:58 localhost journal[236030]: ethtool ioctl error on tapd0d9188c-4f: No such device Oct 14 06:16:58 localhost journal[236030]: ethtool ioctl error on tapd0d9188c-4f: No such device Oct 14 06:16:58 localhost nova_compute[295778]: 2025-10-14 10:16:58.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:58 localhost nova_compute[295778]: 2025-10-14 10:16:58.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:16:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e136 do_prune osdmap full prune enabled Oct 14 06:16:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e137 e137: 6 total, 6 up, 6 in Oct 14 06:16:58 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e137: 6 total, 6 up, 6 in Oct 14 06:16:58 localhost neutron_sriov_agent[263389]: 2025-10-14 10:16:58.778 2 INFO neutron.agent.securitygroups_rpc [None req-e2de72de-d41d-42ae-a801-e4c63c4ba793 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:16:59 localhost podman[337779]: Oct 14 06:16:59 localhost podman[337779]: 2025-10-14 10:16:59.263165684 +0000 UTC m=+0.096536570 container create b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 06:16:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 192 MiB data, 831 MiB used, 41 GiB / 42 GiB avail Oct 14 06:16:59 localhost systemd[1]: Started libpod-conmon-b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5.scope. Oct 14 06:16:59 localhost podman[337779]: 2025-10-14 10:16:59.217853998 +0000 UTC m=+0.051224874 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:16:59 localhost systemd[1]: tmp-crun.nOUlhS.mount: Deactivated successfully. Oct 14 06:16:59 localhost systemd[1]: Started libcrun container. Oct 14 06:16:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b6f61f5dbebb144c3a5739516977a66fb986436ee65ad78162e32ab8206a8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:16:59 localhost podman[337779]: 2025-10-14 10:16:59.352160571 +0000 UTC m=+0.185531457 container init b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:16:59 localhost podman[337779]: 2025-10-14 10:16:59.361008366 +0000 UTC m=+0.194379242 container start b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:16:59 localhost dnsmasq[337797]: started, version 2.85 cachesize 150 Oct 14 06:16:59 localhost dnsmasq[337797]: DNS service limited to local subnets Oct 14 06:16:59 localhost dnsmasq[337797]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:16:59 localhost dnsmasq[337797]: warning: no upstream servers configured Oct 14 06:16:59 localhost dnsmasq-dhcp[337797]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:16:59 localhost dnsmasq[337797]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:16:59 localhost dnsmasq-dhcp[337797]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:16:59 localhost dnsmasq-dhcp[337797]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:16:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:59.427 270389 INFO neutron.agent.dhcp.agent [None req-fa331db5-30d9-4768-bc56-54624d3398a0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:16:58Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6ab95d0c-a5a6-4cf4-907b-2d03a74c1580, ip_allocation=immediate, mac_address=fa:16:3e:25:69:55, name=tempest-NetworksTestDHCPv6-2034178041, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], 
created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=46, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['fb294f78-9bb4-4ac8-839b-9c6c63e27a46'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:56Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2196, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:16:58Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:16:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:59.587 270389 INFO neutron.agent.dhcp.agent [None req-04aa3185-9062-4fc0-bdff-574dcaefc79c - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:16:59 localhost dnsmasq[337797]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 1 addresses Oct 14 06:16:59 localhost dnsmasq-dhcp[337797]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:16:59 localhost dnsmasq-dhcp[337797]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:16:59 localhost podman[337814]: 2025-10-14 10:16:59.637299327 +0000 UTC m=+0.062054532 container kill b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:16:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:16:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:16:59.913 270389 INFO neutron.agent.dhcp.agent [None req-466fda3e-77eb-4220-9c89-a3f2d53d6cb7 - - - - - -] DHCP configuration for ports {'6ab95d0c-a5a6-4cf4-907b-2d03a74c1580'} is completed#033[00m Oct 14 06:17:00 localhost dnsmasq[337797]: exiting on receipt of SIGTERM Oct 14 06:17:00 localhost podman[337852]: 2025-10-14 10:17:00.039503577 +0000 UTC m=+0.062935866 container kill b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009) Oct 14 06:17:00 localhost systemd[1]: libpod-b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5.scope: Deactivated successfully. 
Oct 14 06:17:00 localhost podman[337865]: 2025-10-14 10:17:00.11029633 +0000 UTC m=+0.058300722 container died b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:17:00 localhost podman[337865]: 2025-10-14 10:17:00.143945665 +0000 UTC m=+0.091950017 container cleanup b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:17:00 localhost systemd[1]: libpod-conmon-b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5.scope: Deactivated successfully. 
Oct 14 06:17:00 localhost podman[337869]: 2025-10-14 10:17:00.209932381 +0000 UTC m=+0.144351921 container remove b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:17:00 localhost ovn_controller[156286]: 2025-10-14T10:17:00Z|00328|binding|INFO|Releasing lport d0d9188c-4f5a-44bd-9e0a-daca8d51cf91 from this chassis (sb_readonly=0) Oct 14 06:17:00 localhost nova_compute[295778]: 2025-10-14 10:17:00.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:00 localhost kernel: device tapd0d9188c-4f left promiscuous mode Oct 14 06:17:00 localhost ovn_controller[156286]: 2025-10-14T10:17:00Z|00329|binding|INFO|Setting lport d0d9188c-4f5a-44bd-9e0a-daca8d51cf91 down in Southbound Oct 14 06:17:00 localhost systemd[1]: var-lib-containers-storage-overlay-36b6f61f5dbebb144c3a5739516977a66fb986436ee65ad78162e32ab8206a8a-merged.mount: Deactivated successfully. Oct 14 06:17:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b66977f282287d28dfada9ac53d640c3ef1eb9d8afbd9c751f64269b802f4ca5-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:17:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:00.273 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d0d9188c-4f5a-44bd-9e0a-daca8d51cf91) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:00.275 161932 INFO neutron.agent.ovn.metadata.agent [-] Port d0d9188c-4f5a-44bd-9e0a-daca8d51cf91 in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:17:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:00.278 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed 
_get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:17:00 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:00.279 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[72be4b6c-c88f-4c61-926e-dc8e75200ad4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:00 localhost nova_compute[295778]: 2025-10-14 10:17:00.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:00 localhost nova_compute[295778]: 2025-10-14 10:17:00.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:00 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:00.555 270389 INFO neutron.agent.dhcp.agent [None req-788d9578-c764-4749-9007-995a2c92b0f8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:00 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:00.555 270389 INFO neutron.agent.dhcp.agent [None req-788d9578-c764-4749-9007-995a2c92b0f8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:00 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. 
Oct 14 06:17:00 localhost podman[246584]: time="2025-10-14T10:17:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:17:00 localhost podman[246584]: @ - - [14/Oct/2025:10:17:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:17:00 localhost podman[246584]: @ - - [14/Oct/2025:10:17:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18881 "" "Go-http-client/1.1" Oct 14 06:17:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 238 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 56 op/s Oct 14 06:17:01 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:01.629 270389 INFO neutron.agent.linux.ip_lib [None req-c1d95ae0-fda5-4646-a628-91777769ce7e - - - - - -] Device tap72b8f6e4-5b cannot be used as it has no MAC address#033[00m Oct 14 06:17:01 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:01.637 2 INFO neutron.agent.securitygroups_rpc [None req-11f572e0-ac9f-415d-bc0f-f3f90ef6adc7 23c87f3e6fcf4e92b503a3545c69b885 bc139a195b1a4766b00c4bbfdffdb9e3 - - default default] Security group member updated ['a20ff476-7b51-48c6-a80f-bb88f6adeae7']#033[00m Oct 14 06:17:01 localhost nova_compute[295778]: 2025-10-14 10:17:01.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:01 localhost kernel: device tap72b8f6e4-5b entered promiscuous mode Oct 14 06:17:01 localhost nova_compute[295778]: 2025-10-14 10:17:01.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:01 localhost NetworkManager[5972]: [1760437021.6632] manager: (tap72b8f6e4-5b): new Generic device 
(/org/freedesktop/NetworkManager/Devices/60) Oct 14 06:17:01 localhost ovn_controller[156286]: 2025-10-14T10:17:01Z|00330|binding|INFO|Claiming lport 72b8f6e4-5ba3-438e-afab-2731847eecef for this chassis. Oct 14 06:17:01 localhost ovn_controller[156286]: 2025-10-14T10:17:01Z|00331|binding|INFO|72b8f6e4-5ba3-438e-afab-2731847eecef: Claiming unknown Oct 14 06:17:01 localhost systemd-udevd[337899]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:17:01 localhost ovn_controller[156286]: 2025-10-14T10:17:01Z|00332|binding|INFO|Setting lport 72b8f6e4-5ba3-438e-afab-2731847eecef ovn-installed in OVS Oct 14 06:17:01 localhost ovn_controller[156286]: 2025-10-14T10:17:01Z|00333|binding|INFO|Setting lport 72b8f6e4-5ba3-438e-afab-2731847eecef up in Southbound Oct 14 06:17:01 localhost nova_compute[295778]: 2025-10-14 10:17:01.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:01.677 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe89:980b/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=72b8f6e4-5ba3-438e-afab-2731847eecef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:01.681 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 72b8f6e4-5ba3-438e-afab-2731847eecef in datapath 74049e43-4aa7-4318-9233-a58980c3495b bound to our chassis#033[00m Oct 14 06:17:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:01.686 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 095025a5-9c3f-4734-b1d9-1425872c6dca IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:17:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:01.686 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:17:01 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:01.687 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bb1ed7-9b58-4670-8e0d-dcca5942085f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:01 localhost journal[236030]: ethtool ioctl error on tap72b8f6e4-5b: No such device Oct 14 06:17:01 localhost nova_compute[295778]: 2025-10-14 10:17:01.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:01 localhost 
journal[236030]: ethtool ioctl error on tap72b8f6e4-5b: No such device Oct 14 06:17:01 localhost journal[236030]: ethtool ioctl error on tap72b8f6e4-5b: No such device Oct 14 06:17:01 localhost journal[236030]: ethtool ioctl error on tap72b8f6e4-5b: No such device Oct 14 06:17:01 localhost journal[236030]: ethtool ioctl error on tap72b8f6e4-5b: No such device Oct 14 06:17:01 localhost journal[236030]: ethtool ioctl error on tap72b8f6e4-5b: No such device Oct 14 06:17:01 localhost journal[236030]: ethtool ioctl error on tap72b8f6e4-5b: No such device Oct 14 06:17:01 localhost journal[236030]: ethtool ioctl error on tap72b8f6e4-5b: No such device Oct 14 06:17:01 localhost nova_compute[295778]: 2025-10-14 10:17:01.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:01 localhost nova_compute[295778]: 2025-10-14 10:17:01.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:01 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:01.868 2 INFO neutron.agent.securitygroups_rpc [None req-0ccc5aba-1212-45b4-a10a-2b1f47c48b5c 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:17:02 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:02.530 2 INFO neutron.agent.securitygroups_rpc [None req-2ef3d09c-5701-40ee-9138-ed88eafd4701 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:17:02 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e137 do_prune osdmap full prune enabled Oct 14 06:17:02 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e138 e138: 6 total, 6 up, 6 in Oct 14 06:17:02 localhost ceph-mon[307093]: 
log_channel(cluster) log [DBG] : osdmap e138: 6 total, 6 up, 6 in Oct 14 06:17:02 localhost podman[337970]: Oct 14 06:17:02 localhost podman[337970]: 2025-10-14 10:17:02.650678954 +0000 UTC m=+0.100691650 container create ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:02 localhost systemd[1]: Started libpod-conmon-ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb.scope. Oct 14 06:17:02 localhost podman[337970]: 2025-10-14 10:17:02.600597122 +0000 UTC m=+0.050609838 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:02 localhost systemd[1]: Started libcrun container. 
Oct 14 06:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec0e418c41b72ecd9e863b4a40d3f29baa412ad77ccdaf176e6fa735dddd708/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:02 localhost podman[337970]: 2025-10-14 10:17:02.734933796 +0000 UTC m=+0.184946502 container init ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Oct 14 06:17:02 localhost podman[337970]: 2025-10-14 10:17:02.743770571 +0000 UTC m=+0.193783347 container start ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:17:02 localhost dnsmasq[337988]: started, version 2.85 cachesize 150 Oct 14 06:17:02 localhost dnsmasq[337988]: DNS service limited to local subnets Oct 14 06:17:02 localhost dnsmasq[337988]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:02 localhost dnsmasq[337988]: warning: no upstream servers 
configured Oct 14 06:17:02 localhost dnsmasq[337988]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:02.922 270389 INFO neutron.agent.dhcp.agent [None req-00e514e1-a716-4642-a682-ef0fb06c3c1c - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:17:03 localhost dnsmasq[337988]: exiting on receipt of SIGTERM Oct 14 06:17:03 localhost podman[338005]: 2025-10-14 10:17:03.117940345 +0000 UTC m=+0.063471910 container kill ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:17:03 localhost systemd[1]: libpod-ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb.scope: Deactivated successfully. 
Oct 14 06:17:03 localhost podman[338020]: 2025-10-14 10:17:03.192427886 +0000 UTC m=+0.055276551 container died ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:17:03 localhost podman[338020]: 2025-10-14 10:17:03.237267759 +0000 UTC m=+0.100116424 container remove ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:17:03 localhost systemd[1]: libpod-conmon-ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb.scope: Deactivated successfully. 
Oct 14 06:17:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 238 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 54 op/s Oct 14 06:17:03 localhost openstack_network_exporter[248748]: ERROR 10:17:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:17:03 localhost openstack_network_exporter[248748]: ERROR 10:17:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:17:03 localhost openstack_network_exporter[248748]: ERROR 10:17:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:17:03 localhost openstack_network_exporter[248748]: ERROR 10:17:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:17:03 localhost openstack_network_exporter[248748]: Oct 14 06:17:03 localhost nova_compute[295778]: 2025-10-14 10:17:03.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:03 localhost openstack_network_exporter[248748]: ERROR 10:17:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:17:03 localhost openstack_network_exporter[248748]: Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.525 270389 INFO neutron.agent.dhcp.agent [None req-46c6e309-2734-40b7-bfdd-6c6b1efab9c8 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed#033[00m Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent [None req-803359db-6484-4564-a40d-1387f6ae907a - - - - - -] Unable to restart dhcp for 74049e43-4aa7-4318-9233-a58980c3495b.: oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable 
to complete operation on subnet 855083a7-9717-457c-8f88-df45b57a83ea: This subnet is being modified by another concurrent operation. Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: ['Traceback (most recent call last):\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n result = func(ctxt, **new_args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File 
"/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 329, in update_dhcp_port\n return self._port_action(plugin, context, port, \'update_port\')\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in _port_action\n return plugin.update_port(context, port[\'id\'], port)\n', ' File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n return f_with_retry(*args, **kwargs,\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n 
context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1868, in update_port\n updated_port = super(Ml2Plugin, self).update_port(context, id,\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1557, in update_port\n self.ipam.update_port(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 729, in update_port\n changes = self.update_port_with_ips(context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 455, in update_port_with_ips\n changes = self._update_ips_for_port(context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 379, in _update_ips_for_port\n subnets = self._ipam_get_subnets(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n subnet.read_lock_register(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet 855083a7-9717-457c-8f88-df45b57a83ea: This subnet is being modified by another concurrent operation.\n']. 
Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 207, in restart Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent self.enable() Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 324, in enable Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent common_utils.wait_until_true(self._enable, timeout=300) Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 744, in wait_until_true Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent while not predicate(): Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 336, in _enable Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent interface_name = self.device_manager.setup( Oct 14 06:17:03 localhost 
neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1825, in setup Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent self.cleanup_stale_devices(network, dhcp_port=None) Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent self.force_reraise() Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent raise self.value Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1820, in setup Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent port = self.setup_dhcp_port(network, segment) Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1755, in setup_dhcp_port Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent dhcp_port = setup_method(network, device_id, dhcp_subnets) Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1660, in 
_setup_existing_dhcp_port Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent port = self.plugin.update_dhcp_port( Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 901, in update_dhcp_port Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent port = cctxt.call(self.context, 'update_dhcp_port', Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron_lib/rpc.py", line 157, in call Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent return self._original_context.call(ctxt, method, **kwargs) Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent result = self.transport._send( Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent return self._driver.send(target, ctxt, message, Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent return 
self._send(target, ctxt, message, wait_for_reply, timeout, Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent raise result Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet 855083a7-9717-457c-8f88-df45b57a83ea: This subnet is being modified by another concurrent operation. Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent ['Traceback (most recent call last):\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n result = func(ctxt, **new_args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File 
"/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 329, in update_dhcp_port\n return self._port_action(plugin, context, port, \'update_port\')\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in _port_action\n return plugin.update_port(context, port[\'id\'], port)\n', ' File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n return f_with_retry(*args, **kwargs,\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise 
self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1868, in update_port\n updated_port = super(Ml2Plugin, self).update_port(context, id,\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1557, in update_port\n self.ipam.update_port(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 729, in update_port\n changes = self.update_port_with_ips(context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 455, in update_port_with_ips\n changes = self._update_ips_for_port(context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 379, in _update_ips_for_port\n subnets = self._ipam_get_subnets(\n', ' File 
"/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n subnet.read_lock_register(\n', ' File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet 855083a7-9717-457c-8f88-df45b57a83ea: This subnet is being modified by another concurrent operation.\n']. Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.645 270389 ERROR neutron.agent.dhcp.agent #033[00m Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.652 270389 INFO neutron.agent.dhcp.agent [None req-288f7abf-64c4-497e-9f96-6f462a2a409b - - - - - -] Synchronizing state#033[00m Oct 14 06:17:03 localhost systemd[1]: var-lib-containers-storage-overlay-4ec0e418c41b72ecd9e863b4a40d3f29baa412ad77ccdaf176e6fa735dddd708-merged.mount: Deactivated successfully. Oct 14 06:17:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed4b782a3c482bd4173bf4b2c29ee2d36b4f5b49bf21480eb804898eb94678fb-userdata-shm.mount: Deactivated successfully. Oct 14 06:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:17:03 localhost systemd[1]: tmp-crun.VVfY7n.mount: Deactivated successfully. 
Oct 14 06:17:03 localhost podman[338048]: 2025-10-14 10:17:03.783950123 +0000 UTC m=+0.095739368 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6) Oct 14 06:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:17:03 localhost podman[338048]: 2025-10-14 10:17:03.82633515 +0000 UTC m=+0.138124355 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal) Oct 14 06:17:03 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.845 270389 INFO neutron.agent.dhcp.agent [None req-41d33aec-6db6-4751-9b68-cd9b05ead21b - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed#033[00m Oct 14 06:17:03 localhost podman[338069]: 2025-10-14 10:17:03.894463023 +0000 UTC m=+0.088515666 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, 
config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:17:03 localhost podman[338069]: 2025-10-14 10:17:03.907469369 +0000 UTC m=+0.101522012 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.911 270389 INFO neutron.agent.dhcp.agent [None req-c9cf2e1f-f89a-45fa-a9d8-972ff4cdaa86 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.912 270389 INFO neutron.agent.dhcp.agent [-] Starting network 74049e43-4aa7-4318-9233-a58980c3495b dhcp 
configuration#033[00m Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.913 270389 INFO neutron.agent.dhcp.agent [-] Finished network 74049e43-4aa7-4318-9233-a58980c3495b dhcp configuration#033[00m Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.913 270389 INFO neutron.agent.dhcp.agent [-] Starting network 9f8f1ec2-0a31-401b-a39d-d18b4b974195 dhcp configuration#033[00m Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.913 270389 INFO neutron.agent.dhcp.agent [-] Finished network 9f8f1ec2-0a31-401b-a39d-d18b4b974195 dhcp configuration#033[00m Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.914 270389 INFO neutron.agent.dhcp.agent [None req-c9cf2e1f-f89a-45fa-a9d8-972ff4cdaa86 - - - - - -] Synchronizing state complete#033[00m Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.915 270389 INFO neutron.agent.dhcp.agent [None req-803359db-6484-4564-a40d-1387f6ae907a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:17:01Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=4e0b4bbc-367b-4e9b-99bc-4ded09f3af01, ip_allocation=immediate, mac_address=fa:16:3e:3e:6b:21, name=tempest-NetworksTestDHCPv6-812016795, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, 
revision_number=49, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['271cb69c-bedf-4710-982d-677658a3b893', '855083a7-9717-457c-8f88-df45b57a83ea'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:17:01Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2219, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:17:01Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:17:03 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:17:03 localhost podman[338068]: 2025-10-14 10:17:03.991799893 +0000 UTC m=+0.188605639 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:17:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:03.996 270389 INFO neutron.agent.dhcp.agent [None req-648835db-4e80-4fc0-bfa0-263ee254a2bc - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155'} is completed#033[00m Oct 14 06:17:04 localhost podman[338068]: 2025-10-14 10:17:04.060337626 +0000 UTC m=+0.257143382 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, 
container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 06:17:04 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:17:04 localhost podman[338143]: Oct 14 06:17:04 localhost podman[338143]: 2025-10-14 10:17:04.376655171 +0000 UTC m=+0.087586951 container create 521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:17:04 localhost systemd[1]: Started libpod-conmon-521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa.scope. Oct 14 06:17:04 localhost systemd[1]: Started libcrun container. 
Oct 14 06:17:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbdf378cbc3b1b6a7bfcd52574a7da571a0bf67299cd40a5bec065ec9836a27/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:04 localhost podman[338143]: 2025-10-14 10:17:04.336856143 +0000 UTC m=+0.047787963 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:04 localhost podman[338143]: 2025-10-14 10:17:04.441482516 +0000 UTC m=+0.152414306 container init 521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:04 localhost podman[338143]: 2025-10-14 10:17:04.449582761 +0000 UTC m=+0.160514551 container start 521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:17:04 localhost dnsmasq[338161]: started, version 2.85 cachesize 150 Oct 14 06:17:04 localhost dnsmasq[338161]: DNS service limited to local subnets Oct 14 06:17:04 localhost dnsmasq[338161]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:17:04 localhost dnsmasq[338161]: warning: no upstream servers configured
Oct 14 06:17:04 localhost dnsmasq[338161]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 2 addresses
Oct 14 06:17:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 14 06:17:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e139 e139: 6 total, 6 up, 6 in
Oct 14 06:17:04 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e139: 6 total, 6 up, 6 in
Oct 14 06:17:04 localhost systemd[1]: tmp-crun.gnX2hB.mount: Deactivated successfully.
Oct 14 06:17:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:04.722 270389 INFO neutron.agent.dhcp.agent [None req-5c2ae0e8-c774-473f-8252-c10edcffec2a - - - - - -] DHCP configuration for ports {'4e0b4bbc-367b-4e9b-99bc-4ded09f3af01'} is completed
Oct 14 06:17:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:17:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 14 06:17:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e140 e140: 6 total, 6 up, 6 in
Oct 14 06:17:04 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e140: 6 total, 6 up, 6 in
Oct 14 06:17:04 localhost podman[338187]: 2025-10-14 10:17:04.8539973 +0000 UTC m=+0.068823221 container kill 521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 06:17:04 localhost dnsmasq[338161]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:17:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 14 06:17:05 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/871175732' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 14 06:17:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 14 06:17:05 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/871175732' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 14 06:17:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 238 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 7.2 MiB/s rd, 7.1 MiB/s wr, 201 op/s
Oct 14 06:17:05 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:05.460 2 INFO neutron.agent.securitygroups_rpc [None req-575abab5-2783-4770-9528-99d22d30e6e1 23c87f3e6fcf4e92b503a3545c69b885 bc139a195b1a4766b00c4bbfdffdb9e3 - - default default] Security group member updated ['a20ff476-7b51-48c6-a80f-bb88f6adeae7']
Oct 14 06:17:05 localhost nova_compute[295778]: 2025-10-14 10:17:05.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:05 localhost dnsmasq[338161]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:17:05 localhost podman[338226]: 2025-10-14 10:17:05.607305081 +0000 UTC m=+0.060870841 container kill 521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 06:17:05 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:05.744 270389 INFO neutron.agent.linux.ip_lib [None req-a296c14d-611e-44a8-a5c9-41c4956e76f2 - - - - - -] Device tape9832f1b-8b cannot be used as it has no MAC address
Oct 14 06:17:05 localhost nova_compute[295778]: 2025-10-14 10:17:05.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:05 localhost kernel: device tape9832f1b-8b entered promiscuous mode
Oct 14 06:17:05 localhost NetworkManager[5972]: [1760437025.7831] manager: (tape9832f1b-8b): new Generic device (/org/freedesktop/NetworkManager/Devices/61)
Oct 14 06:17:05 localhost ovn_controller[156286]: 2025-10-14T10:17:05Z|00334|binding|INFO|Claiming lport e9832f1b-8b89-48dc-9c1e-3e75c924fce0 for this chassis.
Oct 14 06:17:05 localhost ovn_controller[156286]: 2025-10-14T10:17:05Z|00335|binding|INFO|e9832f1b-8b89-48dc-9c1e-3e75c924fce0: Claiming unknown
Oct 14 06:17:05 localhost nova_compute[295778]: 2025-10-14 10:17:05.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:05 localhost systemd-udevd[338255]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:17:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:05.799 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe25:8b0e/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-9f8f1ec2-0a31-401b-a39d-d18b4b974195', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f8f1ec2-0a31-401b-a39d-d18b4b974195', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc139a195b1a4766b00c4bbfdffdb9e3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f42a2d3a-9d7d-4fa7-a462-9eef6faa0911, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e9832f1b-8b89-48dc-9c1e-3e75c924fce0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:17:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:05.801 161932 INFO neutron.agent.ovn.metadata.agent [-] Port e9832f1b-8b89-48dc-9c1e-3e75c924fce0 in datapath 9f8f1ec2-0a31-401b-a39d-d18b4b974195 bound to our chassis
Oct 14 06:17:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:05.804 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9cbf9dae-8484-4220-a92d-a206f1aa323e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 14 06:17:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:05.805 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f8f1ec2-0a31-401b-a39d-d18b4b974195, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 06:17:05 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:05.806 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b5313229-ddbc-4c39-b1b4-1aa02920eece]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:17:05 localhost journal[236030]: ethtool ioctl error on tape9832f1b-8b: No such device
Oct 14 06:17:05 localhost nova_compute[295778]: 2025-10-14 10:17:05.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:05 localhost ovn_controller[156286]: 2025-10-14T10:17:05Z|00336|binding|INFO|Setting lport e9832f1b-8b89-48dc-9c1e-3e75c924fce0 ovn-installed in OVS
Oct 14 06:17:05 localhost ovn_controller[156286]: 2025-10-14T10:17:05Z|00337|binding|INFO|Setting lport e9832f1b-8b89-48dc-9c1e-3e75c924fce0 up in Southbound
Oct 14 06:17:05 localhost journal[236030]: ethtool ioctl error on tape9832f1b-8b: No such device
Oct 14 06:17:05 localhost nova_compute[295778]: 2025-10-14 10:17:05.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:05 localhost journal[236030]: ethtool ioctl error on tape9832f1b-8b: No such device
Oct 14 06:17:05 localhost journal[236030]: ethtool ioctl error on tape9832f1b-8b: No such device
Oct 14 06:17:05 localhost journal[236030]: ethtool ioctl error on tape9832f1b-8b: No such device
Oct 14 06:17:05 localhost journal[236030]: ethtool ioctl error on tape9832f1b-8b: No such device
Oct 14 06:17:05 localhost journal[236030]: ethtool ioctl error on tape9832f1b-8b: No such device
Oct 14 06:17:05 localhost journal[236030]: ethtool ioctl error on tape9832f1b-8b: No such device
Oct 14 06:17:05 localhost nova_compute[295778]: 2025-10-14 10:17:05.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:05 localhost nova_compute[295778]: 2025-10-14 10:17:05.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:05 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:05.970 270389 INFO neutron.agent.dhcp.agent [None req-bab494b6-36aa-4882-8b9d-18b3ef71cb05 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed
Oct 14 06:17:06 localhost podman[338310]: 2025-10-14 10:17:06.230495691 +0000 UTC m=+0.069314606 container kill 521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 06:17:06 localhost dnsmasq[338161]: exiting on receipt of SIGTERM
Oct 14 06:17:06 localhost systemd[1]: libpod-521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa.scope: Deactivated successfully.
Oct 14 06:17:06 localhost podman[338328]: 2025-10-14 10:17:06.30456091 +0000 UTC m=+0.053228977 container died 521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 06:17:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa-userdata-shm.mount: Deactivated successfully.
Oct 14 06:17:06 localhost podman[338328]: 2025-10-14 10:17:06.356693548 +0000 UTC m=+0.105361585 container remove 521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:17:06 localhost systemd[1]: libpod-conmon-521f48d1ee0515899a598e8f5dfe8872f455dffa6e5cce6818b465614f77bbaa.scope: Deactivated successfully.
Oct 14 06:17:06 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:06.635 2 INFO neutron.agent.securitygroups_rpc [None req-6897e8e4-d926-40ba-b8d6-d86b71a1c265 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']
Oct 14 06:17:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 14 06:17:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e141 e141: 6 total, 6 up, 6 in
Oct 14 06:17:06 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e141: 6 total, 6 up, 6 in
Oct 14 06:17:06 localhost podman[338382]:
Oct 14 06:17:06 localhost podman[338382]: 2025-10-14 10:17:06.818404141 +0000 UTC m=+0.108732224 container create db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f8f1ec2-0a31-401b-a39d-d18b4b974195, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 06:17:06 localhost podman[338382]: 2025-10-14 10:17:06.763090929 +0000 UTC m=+0.053419032 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:17:06 localhost systemd[1]: Started libpod-conmon-db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176.scope.
Oct 14 06:17:06 localhost systemd[1]: Started libcrun container.
Oct 14 06:17:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752925d796ef001ad803c13eaa58d65c93e45f1a7f3ca1d8aefc7febf7dfa114/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:17:06 localhost podman[338382]: 2025-10-14 10:17:06.894743972 +0000 UTC m=+0.185072045 container init db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f8f1ec2-0a31-401b-a39d-d18b4b974195, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:17:06 localhost podman[338382]: 2025-10-14 10:17:06.904603034 +0000 UTC m=+0.194931107 container start db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f8f1ec2-0a31-401b-a39d-d18b4b974195, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 06:17:06 localhost dnsmasq[338407]: started, version 2.85 cachesize 150
Oct 14 06:17:06 localhost dnsmasq[338407]: DNS service limited to local subnets
Oct 14 06:17:06 localhost dnsmasq[338407]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:17:06 localhost dnsmasq[338407]: warning: no upstream servers configured
Oct 14 06:17:06 localhost dnsmasq-dhcp[338407]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 14 06:17:06 localhost dnsmasq[338407]: read /var/lib/neutron/dhcp/9f8f1ec2-0a31-401b-a39d-d18b4b974195/addn_hosts - 0 addresses
Oct 14 06:17:06 localhost dnsmasq-dhcp[338407]: read /var/lib/neutron/dhcp/9f8f1ec2-0a31-401b-a39d-d18b4b974195/host
Oct 14 06:17:06 localhost dnsmasq-dhcp[338407]: read /var/lib/neutron/dhcp/9f8f1ec2-0a31-401b-a39d-d18b4b974195/opts
Oct 14 06:17:07 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:07.145 270389 INFO neutron.agent.dhcp.agent [None req-425261db-85c5-41c2-889b-18a9620d8398 - - - - - -] DHCP configuration for ports {'ad64bb49-d70c-4834-b64c-451806ba10d5'} is completed
Oct 14 06:17:07 localhost systemd[1]: var-lib-containers-storage-overlay-2bbdf378cbc3b1b6a7bfcd52574a7da571a0bf67299cd40a5bec065ec9836a27-merged.mount: Deactivated successfully.
Oct 14 06:17:07 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:07.247 2 INFO neutron.agent.securitygroups_rpc [None req-24eae660-5697-45be-9e02-e6810e5b8e17 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']
Oct 14 06:17:07 localhost podman[338440]: 2025-10-14 10:17:07.285507887 +0000 UTC m=+0.066738747 container kill db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f8f1ec2-0a31-401b-a39d-d18b4b974195, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 06:17:07 localhost dnsmasq[338407]: exiting on receipt of SIGTERM
Oct 14 06:17:07 localhost systemd[1]: libpod-db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176.scope: Deactivated successfully.
Oct 14 06:17:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 238 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 4.6 MiB/s rd, 4.5 MiB/s wr, 187 op/s
Oct 14 06:17:07 localhost podman[338456]: 2025-10-14 10:17:07.364828128 +0000 UTC m=+0.062541616 container died db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f8f1ec2-0a31-401b-a39d-d18b4b974195, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 06:17:07 localhost podman[338456]: 2025-10-14 10:17:07.39348894 +0000 UTC m=+0.091202398 container cleanup db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f8f1ec2-0a31-401b-a39d-d18b4b974195, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 06:17:07 localhost systemd[1]: libpod-conmon-db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176.scope: Deactivated successfully.
Oct 14 06:17:07 localhost podman[338459]: 2025-10-14 10:17:07.434943893 +0000 UTC m=+0.127107863 container remove db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9f8f1ec2-0a31-401b-a39d-d18b4b974195, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009)
Oct 14 06:17:07 localhost ovn_controller[156286]: 2025-10-14T10:17:07Z|00338|binding|INFO|Releasing lport e9832f1b-8b89-48dc-9c1e-3e75c924fce0 from this chassis (sb_readonly=0)
Oct 14 06:17:07 localhost nova_compute[295778]: 2025-10-14 10:17:07.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:07 localhost ovn_controller[156286]: 2025-10-14T10:17:07Z|00339|binding|INFO|Setting lport e9832f1b-8b89-48dc-9c1e-3e75c924fce0 down in Southbound
Oct 14 06:17:07 localhost kernel: device tape9832f1b-8b left promiscuous mode
Oct 14 06:17:07 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:07.455 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-9f8f1ec2-0a31-401b-a39d-d18b4b974195', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f8f1ec2-0a31-401b-a39d-d18b4b974195', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc139a195b1a4766b00c4bbfdffdb9e3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f42a2d3a-9d7d-4fa7-a462-9eef6faa0911, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e9832f1b-8b89-48dc-9c1e-3e75c924fce0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 06:17:07 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:07.458 161932 INFO neutron.agent.ovn.metadata.agent [-] Port e9832f1b-8b89-48dc-9c1e-3e75c924fce0 in datapath 9f8f1ec2-0a31-401b-a39d-d18b4b974195 unbound from our chassis
Oct 14 06:17:07 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:07.461 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f8f1ec2-0a31-401b-a39d-d18b4b974195, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 06:17:07 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:07.462 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[e92e8bb1-a38b-4e3e-a3f0-5a59f3eb55ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 06:17:07 localhost nova_compute[295778]: 2025-10-14 10:17:07.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:07 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:07.690 270389 INFO neutron.agent.dhcp.agent [None req-f5c47021-0dce-4b15-b5da-19529308f09d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 14 06:17:07 localhost podman[338511]:
Oct 14 06:17:07 localhost podman[338511]: 2025-10-14 10:17:07.707107504 +0000 UTC m=+0.098033669 container create 17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:17:07 localhost systemd[1]: Started libpod-conmon-17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115.scope.
Oct 14 06:17:07 localhost podman[338511]: 2025-10-14 10:17:07.654979727 +0000 UTC m=+0.045905922 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:17:07 localhost systemd[1]: Started libcrun container.
Oct 14 06:17:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69c1d9ff29c0b7b22e5c4690fd458481ed65e0dbbfe839870a812fd999fe406/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:17:07 localhost podman[338511]: 2025-10-14 10:17:07.774034044 +0000 UTC m=+0.164960199 container init 17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:17:07 localhost podman[338511]: 2025-10-14 10:17:07.782994222 +0000 UTC m=+0.173920387 container start 17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 06:17:07 localhost dnsmasq[338529]: started, version 2.85 cachesize 150
Oct 14 06:17:07 localhost dnsmasq[338529]: DNS service limited to local subnets
Oct 14 06:17:07 localhost dnsmasq[338529]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:17:07 localhost dnsmasq[338529]: warning: no upstream servers configured
Oct 14 06:17:07 localhost dnsmasq-dhcp[338529]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 14 06:17:07 localhost dnsmasq-dhcp[338529]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 14 06:17:07 localhost dnsmasq[338529]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:17:07 localhost dnsmasq-dhcp[338529]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:17:07 localhost dnsmasq-dhcp[338529]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:17:07 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:07.847 270389 INFO neutron.agent.dhcp.agent [None req-e7482758-48f0-4729-9659-e2d5e683dff0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:17:06Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=1b7f19bc-4a05-490a-9a83-99a4b7f73a97, ip_allocation=immediate, mac_address=fa:16:3e:31:1e:5d, name=tempest-NetworksTestDHCPv6-312527459, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=53, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['26465912-6a92-4325-99d1-196408f5d31c', 'fb40faa5-7e03-48ba-bd06-211e7b408e87'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:17:05Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2247, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:17:06Z on network 74049e43-4aa7-4318-9233-a58980c3495b
Oct 14 06:17:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 14 06:17:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e142 e142: 6 total, 6 up, 6 in
Oct 14 06:17:07 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e142: 6 total, 6 up, 6 in
Oct 14 06:17:08 localhost dnsmasq[338529]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 2 addresses
Oct 14 06:17:08 localhost dnsmasq-dhcp[338529]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:17:08 localhost dnsmasq-dhcp[338529]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:17:08 localhost podman[338550]: 2025-10-14 10:17:08.075617367 +0000 UTC m=+0.074555634 container kill 17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 06:17:08 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:08.087 270389 INFO neutron.agent.dhcp.agent [None req-df77f029-e5d6-47f7-a751-1f70f9a58ef8 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed
Oct 14 06:17:08 localhost systemd[1]: var-lib-containers-storage-overlay-752925d796ef001ad803c13eaa58d65c93e45f1a7f3ca1d8aefc7febf7dfa114-merged.mount: Deactivated successfully.
Oct 14 06:17:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db2a96c5005be68c95b591e8a0062093e9f201f1f859170b58680e2273c57176-userdata-shm.mount: Deactivated successfully.
Oct 14 06:17:08 localhost systemd[1]: run-netns-qdhcp\x2d9f8f1ec2\x2d0a31\x2d401b\x2da39d\x2dd18b4b974195.mount: Deactivated successfully.
Oct 14 06:17:08 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:08.299 270389 INFO neutron.agent.dhcp.agent [None req-ff76b21b-42a9-498c-89e7-bddbfde3b520 - - - - - -] DHCP configuration for ports {'1b7f19bc-4a05-490a-9a83-99a4b7f73a97'} is completed
Oct 14 06:17:08 localhost nova_compute[295778]: 2025-10-14 10:17:08.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:17:08 localhost dnsmasq[338529]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses
Oct 14 06:17:08 localhost dnsmasq-dhcp[338529]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host
Oct 14 06:17:08 localhost podman[338588]: 2025-10-14 10:17:08.477979392 +0000 UTC m=+0.059034732 container kill 17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:17:08 localhost dnsmasq-dhcp[338529]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts
Oct 14 06:17:09 localhost dnsmasq[338529]: exiting on receipt of SIGTERM
Oct 14 06:17:09 localhost podman[338625]: 2025-10-14 10:17:09.070395331 +0000 UTC m=+0.072525699 container kill 17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:17:09 localhost systemd[1]: libpod-17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115.scope: Deactivated successfully.
Oct 14 06:17:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:17:09
Oct 14 06:17:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:17:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap
Oct 14 06:17:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'manila_metadata', 'manila_data', 'backups', 'images']
Oct 14 06:17:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:17:09 localhost podman[338637]: 2025-10-14 10:17:09.13006852 +0000 UTC m=+0.048133352 container died 17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 06:17:09 localhost podman[338637]: 2025-10-14 10:17:09.150909964 +0000 UTC m=+0.068974786 container cleanup 17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 06:17:09 localhost systemd[1]: libpod-conmon-17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115.scope: Deactivated successfully.
Oct 14 06:17:09 localhost podman[338644]: 2025-10-14 10:17:09.167567887 +0000 UTC m=+0.069593963 container remove 17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 14 06:17:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:17:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:17:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:17:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:17:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:17:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:17:09 localhost systemd[1]: tmp-crun.CeI4n0.mount: Deactivated successfully. Oct 14 06:17:09 localhost systemd[1]: var-lib-containers-storage-overlay-c69c1d9ff29c0b7b22e5c4690fd458481ed65e0dbbfe839870a812fd999fe406-merged.mount: Deactivated successfully. Oct 14 06:17:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17ce4d46bd437eb923e9365158a49b0a5c13c6d32142b4d1618bf37e8f259115-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:17:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 238 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 4.7 MiB/s rd, 4.6 MiB/s wr, 188 op/s Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014843822701470315 of space, bias 1.0, pg target 0.2963816599393573 quantized to 32 (current 32) Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of 
space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0014839278859575657 of space, bias 1.0, pg target 0.2953016493055556 quantized to 32 (current 32) Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:17:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:17:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:17:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e142 do_prune osdmap full prune enabled Oct 14 06:17:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e143 e143: 6 total, 6 up, 6 in Oct 14 06:17:09 localhost ceph-mon[307093]: 
log_channel(cluster) log [DBG] : osdmap e143: 6 total, 6 up, 6 in Oct 14 06:17:10 localhost podman[338714]: Oct 14 06:17:10 localhost podman[338714]: 2025-10-14 10:17:10.122556753 +0000 UTC m=+0.101321156 container create 5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:17:10 localhost systemd[1]: Started libpod-conmon-5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d.scope. Oct 14 06:17:10 localhost podman[338714]: 2025-10-14 10:17:10.078505191 +0000 UTC m=+0.057269644 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:10 localhost systemd[1]: Started libcrun container. 
Oct 14 06:17:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71ca7d7eaf26cef2eb6f2ad12ca4c2e0f51223f4698686dae45c5e521e76151/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:10 localhost podman[338714]: 2025-10-14 10:17:10.200007454 +0000 UTC m=+0.178771867 container init 5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:17:10 localhost podman[338714]: 2025-10-14 10:17:10.208563622 +0000 UTC m=+0.187328035 container start 5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:17:10 localhost dnsmasq[338732]: started, version 2.85 cachesize 150 Oct 14 06:17:10 localhost dnsmasq[338732]: DNS service limited to local subnets Oct 14 06:17:10 localhost dnsmasq[338732]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:10 localhost dnsmasq[338732]: warning: no upstream servers 
configured Oct 14 06:17:10 localhost dnsmasq-dhcp[338732]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:17:10 localhost dnsmasq[338732]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:10 localhost dnsmasq-dhcp[338732]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:10 localhost dnsmasq-dhcp[338732]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:10 localhost nova_compute[295778]: 2025-10-14 10:17:10.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:10 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:10.602 270389 INFO neutron.agent.dhcp.agent [None req-00209efe-89cb-4a9a-a03b-3fbbf28a8b2e - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed#033[00m Oct 14 06:17:10 localhost podman[338750]: 2025-10-14 10:17:10.723954223 +0000 UTC m=+0.061778694 container kill 5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:17:10 localhost dnsmasq[338732]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:10 localhost dnsmasq-dhcp[338732]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:10 localhost dnsmasq-dhcp[338732]: read 
/var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:10 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:10.862 2 INFO neutron.agent.securitygroups_rpc [None req-20821d15-e078-46e7-8405-8376b39a40c2 4c194ea59b244432a9ec5417b8101ebe 5ac8b4aa702a449b8bf4a8039f977fc5 - - default default] Security group rule updated ['8fe43e8a-a14a-430f-ba7d-c6a0fef96a1b']#033[00m Oct 14 06:17:11 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:11.065 2 INFO neutron.agent.securitygroups_rpc [None req-e3edfc3e-0d01-4121-a932-2e3facb8956f 4c194ea59b244432a9ec5417b8101ebe 5ac8b4aa702a449b8bf4a8039f977fc5 - - default default] Security group rule updated ['8fe43e8a-a14a-430f-ba7d-c6a0fef96a1b']#033[00m Oct 14 06:17:11 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:11.100 270389 INFO neutron.agent.dhcp.agent [None req-01106f87-63af-484c-b32d-9bde9cb0f31e - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed#033[00m Oct 14 06:17:11 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:11.167 270389 INFO neutron.agent.linux.ip_lib [None req-c5be0063-90c8-48b6-b234-84c5f2e0d422 - - - - - -] Device tap45c86c3a-a4 cannot be used as it has no MAC address#033[00m Oct 14 06:17:11 localhost nova_compute[295778]: 2025-10-14 10:17:11.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:11 localhost kernel: device tap45c86c3a-a4 entered promiscuous mode Oct 14 06:17:11 localhost ovn_controller[156286]: 2025-10-14T10:17:11Z|00340|binding|INFO|Claiming lport 45c86c3a-a4bc-4993-ab94-9ed7f45b600e for this chassis. 
Oct 14 06:17:11 localhost NetworkManager[5972]: [1760437031.1965] manager: (tap45c86c3a-a4): new Generic device (/org/freedesktop/NetworkManager/Devices/62) Oct 14 06:17:11 localhost ovn_controller[156286]: 2025-10-14T10:17:11Z|00341|binding|INFO|45c86c3a-a4bc-4993-ab94-9ed7f45b600e: Claiming unknown Oct 14 06:17:11 localhost nova_compute[295778]: 2025-10-14 10:17:11.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:11 localhost systemd-udevd[338780]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:17:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:11.207 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-d9271a92-3911-4388-b25f-4ca78b313dd4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9271a92-3911-4388-b25f-4ca78b313dd4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a8ee99608b94600b463f14d4902f3b7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93235c55-787b-4f31-b9d2-c7db8ed46611, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=45c86c3a-a4bc-4993-ab94-9ed7f45b600e) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:11.210 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 45c86c3a-a4bc-4993-ab94-9ed7f45b600e in datapath d9271a92-3911-4388-b25f-4ca78b313dd4 bound to our chassis#033[00m Oct 14 06:17:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:11.215 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port f0482aaf-e436-41f2-b3da-97a8cec07beb IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:17:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:11.215 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9271a92-3911-4388-b25f-4ca78b313dd4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:17:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:11.219 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[4747f082-3499-4c17-b454-fbcb26ef9cd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:11 localhost journal[236030]: ethtool ioctl error on tap45c86c3a-a4: No such device Oct 14 06:17:11 localhost ovn_controller[156286]: 2025-10-14T10:17:11Z|00342|binding|INFO|Setting lport 45c86c3a-a4bc-4993-ab94-9ed7f45b600e ovn-installed in OVS Oct 14 06:17:11 localhost ovn_controller[156286]: 2025-10-14T10:17:11Z|00343|binding|INFO|Setting lport 45c86c3a-a4bc-4993-ab94-9ed7f45b600e up in Southbound Oct 14 06:17:11 localhost nova_compute[295778]: 2025-10-14 10:17:11.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:11 localhost journal[236030]: ethtool ioctl error on tap45c86c3a-a4: No such device Oct 
14 06:17:11 localhost journal[236030]: ethtool ioctl error on tap45c86c3a-a4: No such device Oct 14 06:17:11 localhost journal[236030]: ethtool ioctl error on tap45c86c3a-a4: No such device Oct 14 06:17:11 localhost journal[236030]: ethtool ioctl error on tap45c86c3a-a4: No such device Oct 14 06:17:11 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:11.247 2 INFO neutron.agent.securitygroups_rpc [None req-0e84d79c-299b-48cc-b1f2-14fae809fee1 89ecba9e60ab4ed4b2a8f801d81075be 0a8ee99608b94600b463f14d4902f3b7 - - default default] Security group member updated ['1e825526-ca45-4d75-b345-f72249726766']#033[00m Oct 14 06:17:11 localhost journal[236030]: ethtool ioctl error on tap45c86c3a-a4: No such device Oct 14 06:17:11 localhost journal[236030]: ethtool ioctl error on tap45c86c3a-a4: No such device Oct 14 06:17:11 localhost journal[236030]: ethtool ioctl error on tap45c86c3a-a4: No such device Oct 14 06:17:11 localhost nova_compute[295778]: 2025-10-14 10:17:11.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 145 MiB data, 796 MiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 7.5 KiB/s wr, 146 op/s Oct 14 06:17:11 localhost nova_compute[295778]: 2025-10-14 10:17:11.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:12 localhost podman[338851]: Oct 14 06:17:12 localhost podman[338851]: 2025-10-14 10:17:12.204924791 +0000 UTC m=+0.093153028 container create 1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9271a92-3911-4388-b25f-4ca78b313dd4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2) Oct 14 06:17:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:12.222 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b4:89 2001:db8:0:1:f816:3eff:fe63:b489'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bb90059a-750e-43da-ba16-03b3dce8c155) old=Port_Binding(mac=['fa:16:3e:63:b4:89 2001:db8::f816:3eff:fe63:b489'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe63:b489/64', 'neutron:device_id': 'ovnmeta-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': 
'', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:12.223 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bb90059a-750e-43da-ba16-03b3dce8c155 in datapath 74049e43-4aa7-4318-9233-a58980c3495b updated#033[00m Oct 14 06:17:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:12.227 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 095025a5-9c3f-4734-b1d9-1425872c6dca IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:17:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:12.227 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:17:12 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:12.229 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b36d3e0a-d62b-42d2-9a20-903b6a3e74ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:12 localhost systemd[1]: Started libpod-conmon-1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23.scope. Oct 14 06:17:12 localhost podman[338851]: 2025-10-14 10:17:12.157010707 +0000 UTC m=+0.045238924 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:12 localhost systemd[1]: Started libcrun container. 
Oct 14 06:17:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb440e88c49f4ab48bdbfd706e54e00769191314025f624e8f87dafb02204af9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:12 localhost podman[338851]: 2025-10-14 10:17:12.284060498 +0000 UTC m=+0.172288705 container init 1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9271a92-3911-4388-b25f-4ca78b313dd4, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 06:17:12 localhost podman[338851]: 2025-10-14 10:17:12.293236791 +0000 UTC m=+0.181465008 container start 1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9271a92-3911-4388-b25f-4ca78b313dd4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:17:12 localhost dnsmasq[338869]: started, version 2.85 cachesize 150 Oct 14 06:17:12 localhost dnsmasq[338869]: DNS service limited to local subnets Oct 14 06:17:12 localhost dnsmasq[338869]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:12 localhost dnsmasq[338869]: warning: no upstream servers 
configured Oct 14 06:17:12 localhost dnsmasq-dhcp[338869]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:17:12 localhost dnsmasq[338869]: read /var/lib/neutron/dhcp/d9271a92-3911-4388-b25f-4ca78b313dd4/addn_hosts - 0 addresses Oct 14 06:17:12 localhost dnsmasq-dhcp[338869]: read /var/lib/neutron/dhcp/d9271a92-3911-4388-b25f-4ca78b313dd4/host Oct 14 06:17:12 localhost dnsmasq-dhcp[338869]: read /var/lib/neutron/dhcp/d9271a92-3911-4388-b25f-4ca78b313dd4/opts Oct 14 06:17:12 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:12.358 270389 INFO neutron.agent.dhcp.agent [None req-e7e05a2b-233b-421d-9478-7709f8956745 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:17:10Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c16f61d9-5f2d-439f-bb78-66f1a126cd7a, ip_allocation=immediate, mac_address=fa:16:3e:fe:08:0a, name=tempest-TagsExtTest-1093302126, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:17:07Z, description=, dns_domain=, id=d9271a92-3911-4388-b25f-4ca78b313dd4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TagsExtTest-test-network-343480907, port_security_enabled=True, project_id=0a8ee99608b94600b463f14d4902f3b7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=45721, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2262, status=ACTIVE, subnets=['dcedd45e-c1dd-4c47-87fc-388636f0620e'], tags=[], tenant_id=0a8ee99608b94600b463f14d4902f3b7, updated_at=2025-10-14T10:17:09Z, vlan_transparent=None, network_id=d9271a92-3911-4388-b25f-4ca78b313dd4, port_security_enabled=True, 
project_id=0a8ee99608b94600b463f14d4902f3b7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['1e825526-ca45-4d75-b345-f72249726766'], standard_attr_id=2280, status=DOWN, tags=[], tenant_id=0a8ee99608b94600b463f14d4902f3b7, updated_at=2025-10-14T10:17:11Z on network d9271a92-3911-4388-b25f-4ca78b313dd4#033[00m Oct 14 06:17:12 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:12.465 270389 INFO neutron.agent.dhcp.agent [None req-f199e867-66de-41bf-bdb7-2873e82ad1da - - - - - -] DHCP configuration for ports {'42e5284d-331b-4b7f-a5dd-9a8c5dd62fe5'} is completed#033[00m Oct 14 06:17:12 localhost dnsmasq[338869]: read /var/lib/neutron/dhcp/d9271a92-3911-4388-b25f-4ca78b313dd4/addn_hosts - 1 addresses Oct 14 06:17:12 localhost dnsmasq-dhcp[338869]: read /var/lib/neutron/dhcp/d9271a92-3911-4388-b25f-4ca78b313dd4/host Oct 14 06:17:12 localhost podman[338901]: 2025-10-14 10:17:12.608697053 +0000 UTC m=+0.053044801 container kill 1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9271a92-3911-4388-b25f-4ca78b313dd4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:17:12 localhost dnsmasq-dhcp[338869]: read /var/lib/neutron/dhcp/d9271a92-3911-4388-b25f-4ca78b313dd4/opts Oct 14 06:17:12 localhost dnsmasq[338732]: exiting on receipt of SIGTERM Oct 14 06:17:12 localhost podman[338910]: 2025-10-14 10:17:12.667385915 +0000 UTC m=+0.077889553 container kill 5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:17:12 localhost systemd[1]: libpod-5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d.scope: Deactivated successfully. Oct 14 06:17:12 localhost podman[338933]: 2025-10-14 10:17:12.74124039 +0000 UTC m=+0.061878777 container died 5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:17:12 localhost podman[338933]: 2025-10-14 10:17:12.778902772 +0000 UTC m=+0.099541109 container cleanup 5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:17:12 localhost systemd[1]: 
libpod-conmon-5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d.scope: Deactivated successfully. Oct 14 06:17:12 localhost podman[338939]: 2025-10-14 10:17:12.837327806 +0000 UTC m=+0.146296623 container remove 5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:12 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:12.924 270389 INFO neutron.agent.dhcp.agent [None req-7560a0d0-dc39-4032-b1c4-ee94aa63032a - - - - - -] DHCP configuration for ports {'c16f61d9-5f2d-439f-bb78-66f1a126cd7a'} is completed#033[00m Oct 14 06:17:13 localhost nova_compute[295778]: 2025-10-14 10:17:13.192 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:13 localhost systemd[1]: var-lib-containers-storage-overlay-a71ca7d7eaf26cef2eb6f2ad12ca4c2e0f51223f4698686dae45c5e521e76151-merged.mount: Deactivated successfully. Oct 14 06:17:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5080f5bebf4a11c95c29f51cad8109d8709613a3aa5d2b94e259c5fd1f3af05d-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:17:13 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:13.254 2 INFO neutron.agent.securitygroups_rpc [None req-922b1e21-7d02-46c0-8ac9-6ab363b151e5 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:17:13 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e143 do_prune osdmap full prune enabled Oct 14 06:17:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 145 MiB data, 796 MiB used, 41 GiB / 42 GiB avail; 94 KiB/s rd, 6.8 KiB/s wr, 132 op/s Oct 14 06:17:13 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e144 e144: 6 total, 6 up, 6 in Oct 14 06:17:13 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e144: 6 total, 6 up, 6 in Oct 14 06:17:13 localhost nova_compute[295778]: 2025-10-14 10:17:13.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:14 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:14.068 2 INFO neutron.agent.securitygroups_rpc [None req-994ba1d8-96d9-429a-b3ec-c8df771bdf05 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:17:14 localhost podman[339021]: Oct 14 06:17:14 localhost podman[339021]: 2025-10-14 10:17:14.242217282 +0000 UTC m=+0.097399753 container create 0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:14 localhost systemd[1]: Started libpod-conmon-0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7.scope. Oct 14 06:17:14 localhost podman[339021]: 2025-10-14 10:17:14.197238645 +0000 UTC m=+0.052421166 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:14 localhost systemd[1]: Started libcrun container. Oct 14 06:17:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b8fe031b37604a6d20e7fe57a403b6fd42ce944f000717851e6bec64f155d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:14 localhost podman[339021]: 2025-10-14 10:17:14.310879848 +0000 UTC m=+0.166062299 container init 0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009) Oct 14 06:17:14 localhost podman[339021]: 2025-10-14 10:17:14.32148089 +0000 UTC m=+0.176663341 container start 0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:14 localhost dnsmasq[339039]: started, version 2.85 cachesize 150 Oct 14 06:17:14 localhost dnsmasq[339039]: DNS service limited to local subnets Oct 14 06:17:14 localhost dnsmasq[339039]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:14 localhost dnsmasq[339039]: warning: no upstream servers configured Oct 14 06:17:14 localhost dnsmasq-dhcp[339039]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:17:14 localhost dnsmasq[339039]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:14 localhost dnsmasq-dhcp[339039]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:14 localhost dnsmasq-dhcp[339039]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:14.364 270389 INFO neutron.agent.dhcp.agent [None req-523f5c92-9b56-411e-94b8-2bb4530de6af - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:17:12Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=34aee587-6412-4140-b0cf-fb48eaed8aef, ip_allocation=immediate, mac_address=fa:16:3e:97:fd:e0, name=tempest-NetworksTestDHCPv6-287123147, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, 
port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=57, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['2102ae4b-94cb-4b8b-b5d5-8cecc695f6cd', '9dac65ea-887d-4b0a-8a68-9d0c592ec0ca'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:17:10Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2294, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:17:13Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:17:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e144 do_prune osdmap full prune enabled Oct 14 06:17:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e145 e145: 6 total, 6 up, 6 in Oct 14 06:17:14 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e145: 6 total, 6 up, 6 in Oct 14 06:17:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:14.572 270389 INFO neutron.agent.dhcp.agent [None req-7f628163-7776-4aed-97cb-5b633d246bfb - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed#033[00m Oct 14 06:17:14 localhost dnsmasq[339039]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 2 addresses Oct 14 06:17:14 localhost dnsmasq-dhcp[339039]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:14 localhost dnsmasq-dhcp[339039]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:14 localhost podman[339058]: 2025-10-14 
10:17:14.584271651 +0000 UTC m=+0.059138554 container kill 0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:14.786 270389 INFO neutron.agent.dhcp.agent [None req-10876420-3cf9-4341-9247-722146fc10fa - - - - - -] DHCP configuration for ports {'34aee587-6412-4140-b0cf-fb48eaed8aef'} is completed#033[00m Oct 14 06:17:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e145 do_prune osdmap full prune enabled Oct 14 06:17:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e146 e146: 6 total, 6 up, 6 in Oct 14 06:17:14 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e146: 6 total, 6 up, 6 in Oct 14 06:17:14 localhost dnsmasq[339039]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:14 localhost dnsmasq-dhcp[339039]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:14 localhost dnsmasq-dhcp[339039]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:14 localhost podman[339096]: 2025-10-14 10:17:14.919374116 +0000 UTC m=+0.069273013 container kill 0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:17:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 145 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 161 KiB/s rd, 11 KiB/s wr, 223 op/s Oct 14 06:17:15 localhost podman[339118]: 2025-10-14 10:17:15.369793349 +0000 UTC m=+0.092452800 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251009) Oct 14 06:17:15 localhost podman[339118]: 2025-10-14 10:17:15.455001217 +0000 UTC m=+0.177660648 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute) Oct 14 06:17:15 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:17:15 localhost dnsmasq[339039]: exiting on receipt of SIGTERM Oct 14 06:17:15 localhost systemd[1]: tmp-crun.heGFGj.mount: Deactivated successfully. Oct 14 06:17:15 localhost podman[339148]: 2025-10-14 10:17:15.540926402 +0000 UTC m=+0.126277590 container kill 0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:17:15 localhost systemd[1]: libpod-0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7.scope: Deactivated successfully. 
Oct 14 06:17:15 localhost nova_compute[295778]: 2025-10-14 10:17:15.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:15 localhost podman[339165]: 2025-10-14 10:17:15.61750786 +0000 UTC m=+0.051874822 container died 0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 14 06:17:15 localhost podman[339165]: 2025-10-14 10:17:15.661041578 +0000 UTC m=+0.095408510 container remove 0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:15 localhost systemd[1]: libpod-conmon-0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7.scope: Deactivated successfully. Oct 14 06:17:16 localhost systemd[1]: var-lib-containers-storage-overlay-34b8fe031b37604a6d20e7fe57a403b6fd42ce944f000717851e6bec64f155d3-merged.mount: Deactivated successfully. 
Oct 14 06:17:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a74e8695e2661eb8e430d9bd8e31e43d4ebe857a28852249b12657944b80ce7-userdata-shm.mount: Deactivated successfully. Oct 14 06:17:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e146 do_prune osdmap full prune enabled Oct 14 06:17:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e147 e147: 6 total, 6 up, 6 in Oct 14 06:17:16 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e147: 6 total, 6 up, 6 in Oct 14 06:17:16 localhost podman[339241]: Oct 14 06:17:16 localhost podman[339241]: 2025-10-14 10:17:16.554893957 +0000 UTC m=+0.096658152 container create 17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:17:16 localhost nova_compute[295778]: 2025-10-14 10:17:16.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:16.570 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 
'6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:16 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:16.571 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:17:16 localhost systemd[1]: Started libpod-conmon-17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5.scope. Oct 14 06:17:16 localhost podman[339241]: 2025-10-14 10:17:16.516395983 +0000 UTC m=+0.058160208 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:16 localhost systemd[1]: Started libcrun container. Oct 14 06:17:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93353192d7e0adf6dad87484f9b81a38accc00e09380c7afa567b8c7e096227/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:16 localhost podman[339241]: 2025-10-14 10:17:16.634034122 +0000 UTC m=+0.175798327 container init 17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:17:16 localhost podman[339241]: 2025-10-14 10:17:16.645180969 +0000 UTC m=+0.186945164 container start 17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 14 06:17:16 localhost dnsmasq[339260]: started, version 2.85 cachesize 150 Oct 14 06:17:16 localhost dnsmasq[339260]: DNS service limited to local subnets Oct 14 06:17:16 localhost dnsmasq[339260]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:16 localhost dnsmasq[339260]: warning: no upstream servers configured Oct 14 06:17:16 localhost dnsmasq-dhcp[339260]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:17:16 localhost dnsmasq[339260]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:16 localhost dnsmasq-dhcp[339260]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:16 localhost dnsmasq-dhcp[339260]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:16 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:16.931 270389 INFO neutron.agent.dhcp.agent [None req-c19be27b-2996-4a02-b30d-b8181ef95363 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed#033[00m Oct 14 06:17:17 localhost dnsmasq[339260]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:17 localhost dnsmasq-dhcp[339260]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:17 localhost dnsmasq-dhcp[339260]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:17 
localhost podman[339278]: 2025-10-14 10:17:17.013901469 +0000 UTC m=+0.060439030 container kill 17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:17 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:17.156 2 INFO neutron.agent.securitygroups_rpc [None req-2f4a518e-a759-4d76-98c3-9a8a58ba34a2 89ecba9e60ab4ed4b2a8f801d81075be 0a8ee99608b94600b463f14d4902f3b7 - - default default] Security group member updated ['1e825526-ca45-4d75-b345-f72249726766']#033[00m Oct 14 06:17:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 145 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 3.7 KiB/s wr, 86 op/s Oct 14 06:17:17 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:17.342 270389 INFO neutron.agent.dhcp.agent [None req-d488cbb9-34fe-459b-9385-97221a8ece26 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed#033[00m Oct 14 06:17:17 localhost dnsmasq[338869]: read /var/lib/neutron/dhcp/d9271a92-3911-4388-b25f-4ca78b313dd4/addn_hosts - 0 addresses Oct 14 06:17:17 localhost podman[339317]: 2025-10-14 10:17:17.424914223 +0000 UTC m=+0.058240900 container kill 1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9271a92-3911-4388-b25f-4ca78b313dd4, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:17:17 localhost dnsmasq-dhcp[338869]: read /var/lib/neutron/dhcp/d9271a92-3911-4388-b25f-4ca78b313dd4/host Oct 14 06:17:17 localhost dnsmasq-dhcp[338869]: read /var/lib/neutron/dhcp/d9271a92-3911-4388-b25f-4ca78b313dd4/opts Oct 14 06:17:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e147 do_prune osdmap full prune enabled Oct 14 06:17:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e148 e148: 6 total, 6 up, 6 in Oct 14 06:17:17 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:17.494 2 INFO neutron.agent.securitygroups_rpc [None req-350544ed-e804-411b-b4c0-7c1dc91acd03 23c87f3e6fcf4e92b503a3545c69b885 bc139a195b1a4766b00c4bbfdffdb9e3 - - default default] Security group member updated ['a20ff476-7b51-48c6-a80f-bb88f6adeae7']#033[00m Oct 14 06:17:17 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e148: 6 total, 6 up, 6 in Oct 14 06:17:17 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:17.545 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:18 localhost dnsmasq[338869]: exiting on receipt of SIGTERM Oct 14 06:17:18 localhost podman[339356]: 2025-10-14 10:17:18.064824947 +0000 UTC m=+0.068892884 container kill 1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9271a92-3911-4388-b25f-4ca78b313dd4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:18 localhost systemd[1]: libpod-1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23.scope: Deactivated successfully. Oct 14 06:17:18 localhost podman[339373]: 2025-10-14 10:17:18.14500436 +0000 UTC m=+0.056867984 container died 1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9271a92-3911-4388-b25f-4ca78b313dd4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:17:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:17:18 localhost podman[339373]: 2025-10-14 10:17:18.191062735 +0000 UTC m=+0.102926309 container remove 1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9271a92-3911-4388-b25f-4ca78b313dd4, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:17:18 localhost ovn_controller[156286]: 2025-10-14T10:17:18Z|00344|binding|INFO|Removing iface tap45c86c3a-a4 ovn-installed in OVS Oct 14 06:17:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:18.198 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port f0482aaf-e436-41f2-b3da-97a8cec07beb with type ""#033[00m Oct 14 06:17:18 localhost ovn_controller[156286]: 2025-10-14T10:17:18Z|00345|binding|INFO|Removing lport 45c86c3a-a4bc-4993-ab94-9ed7f45b600e ovn-installed in OVS Oct 14 06:17:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:18.200 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-d9271a92-3911-4388-b25f-4ca78b313dd4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9271a92-3911-4388-b25f-4ca78b313dd4', 'neutron:port_capabilities': '', 
'neutron:port_name': '', 'neutron:project_id': '0a8ee99608b94600b463f14d4902f3b7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93235c55-787b-4f31-b9d2-c7db8ed46611, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=45c86c3a-a4bc-4993-ab94-9ed7f45b600e) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:18.203 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 45c86c3a-a4bc-4993-ab94-9ed7f45b600e in datapath d9271a92-3911-4388-b25f-4ca78b313dd4 unbound from our chassis#033[00m Oct 14 06:17:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:18.208 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9271a92-3911-4388-b25f-4ca78b313dd4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:17:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:18.209 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5f99ef-affb-4ea8-85ac-8c6dc421e58f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:18 localhost systemd[1]: libpod-conmon-1c18579ef1afde756a8a017bd675ef1c6d92c9c49498b15484ed41a068869d23.scope: Deactivated successfully. 
Oct 14 06:17:18 localhost nova_compute[295778]: 2025-10-14 10:17:18.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:18 localhost kernel: device tap45c86c3a-a4 left promiscuous mode Oct 14 06:17:18 localhost systemd[1]: var-lib-containers-storage-overlay-fb440e88c49f4ab48bdbfd706e54e00769191314025f624e8f87dafb02204af9-merged.mount: Deactivated successfully. Oct 14 06:17:18 localhost nova_compute[295778]: 2025-10-14 10:17:18.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:18 localhost systemd[1]: run-netns-qdhcp\x2dd9271a92\x2d3911\x2d4388\x2db25f\x2d4ca78b313dd4.mount: Deactivated successfully. Oct 14 06:17:18 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:18.279 270389 INFO neutron.agent.dhcp.agent [None req-e2d0cfc2-c731-4262-b20e-10f396511a5d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:18 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:18.397 2 INFO neutron.agent.securitygroups_rpc [None req-781d2756-8989-48c7-8197-f77dae3ec589 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:18 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:18.401 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:18 localhost nova_compute[295778]: 2025-10-14 10:17:18.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:18 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:18.551 2 INFO neutron.agent.securitygroups_rpc [req-53d0fc0c-66d1-42d3-89fc-c2ce28d642ac req-d20a620d-b8b9-4f7d-adb0-dd2566f4346e 
4c194ea59b244432a9ec5417b8101ebe 5ac8b4aa702a449b8bf4a8039f977fc5 - - default default] Security group member updated ['8fe43e8a-a14a-430f-ba7d-c6a0fef96a1b']#033[00m Oct 14 06:17:18 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:18.572 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:18 localhost dnsmasq[339260]: exiting on receipt of SIGTERM Oct 14 06:17:18 localhost podman[339414]: 2025-10-14 10:17:18.589448653 +0000 UTC m=+0.067550128 container kill 17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:18 localhost systemd[1]: libpod-17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5.scope: Deactivated successfully. 
Oct 14 06:17:18 localhost podman[339429]: 2025-10-14 10:17:18.664283535 +0000 UTC m=+0.057125881 container died 17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:17:18 localhost podman[339429]: 2025-10-14 10:17:18.704865655 +0000 UTC m=+0.097707961 container cleanup 17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:17:18 localhost systemd[1]: libpod-conmon-17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5.scope: Deactivated successfully. 
Oct 14 06:17:18 localhost nova_compute[295778]: 2025-10-14 10:17:18.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:18 localhost podman[339430]: 2025-10-14 10:17:18.753353554 +0000 UTC m=+0.137720264 container remove 17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0) Oct 14 06:17:19 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:19.191 2 INFO neutron.agent.securitygroups_rpc [None req-b7e6bbed-7f3f-4a3c-80c3-50a29e36be94 23c87f3e6fcf4e92b503a3545c69b885 bc139a195b1a4766b00c4bbfdffdb9e3 - - default default] Security group member updated ['a20ff476-7b51-48c6-a80f-bb88f6adeae7']#033[00m Oct 14 06:17:19 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:19.217 2 INFO neutron.agent.securitygroups_rpc [None req-130ebb04-cb2a-433a-974b-eb9d9513b4eb 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:17:19 localhost systemd[1]: var-lib-containers-storage-overlay-e93353192d7e0adf6dad87484f9b81a38accc00e09380c7afa567b8c7e096227-merged.mount: Deactivated successfully. Oct 14 06:17:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17de571a643ad2ef2d698fa0be4d2d64d459e7845cb0a4505c75a8b2468faae5-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:17:19 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:19.272 2 INFO neutron.agent.securitygroups_rpc [None req-ed13e3b4-7a5d-4979-8f87-6fc73b0d5f68 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 145 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 3.1 KiB/s wr, 71 op/s Oct 14 06:17:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:19 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:19.988 2 INFO neutron.agent.securitygroups_rpc [None req-2ec4d1c4-e5f7-4c7a-9600-8dd539f820f9 73c3910059834cd0998d3459c50cd69d 82fc7afce38344ffb7eda3bb0fabdb5b - - default default] Security group member updated ['10f25aec-a6f2-40dd-837d-8812e1c0ebb8']#033[00m Oct 14 06:17:20 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:20.058 2 INFO neutron.agent.securitygroups_rpc [None req-13b9aea2-710f-4d66-97e3-a7afd3b9cd17 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:20 localhost podman[339505]: Oct 14 06:17:20 localhost podman[339505]: 2025-10-14 10:17:20.311444275 +0000 UTC m=+0.095127042 container create e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:20 localhost systemd[1]: Started libpod-conmon-e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60.scope. Oct 14 06:17:20 localhost podman[339505]: 2025-10-14 10:17:20.263408857 +0000 UTC m=+0.047091614 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:20 localhost systemd[1]: Started libcrun container. Oct 14 06:17:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf5f6a33c0df93e301d8c7083aa21c53382f493ac0be013374b99670dd1f529/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:20 localhost podman[339505]: 2025-10-14 10:17:20.392268935 +0000 UTC m=+0.175951682 container init e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:20 localhost podman[339505]: 2025-10-14 10:17:20.402612041 +0000 UTC m=+0.186294788 container start e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:17:20 localhost dnsmasq[339523]: started, version 2.85 cachesize 150 Oct 14 06:17:20 localhost dnsmasq[339523]: DNS service limited to local subnets Oct 14 06:17:20 localhost dnsmasq[339523]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:20 localhost dnsmasq[339523]: warning: no upstream servers configured Oct 14 06:17:20 localhost dnsmasq-dhcp[339523]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:17:20 localhost dnsmasq-dhcp[339523]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 14 06:17:20 localhost dnsmasq[339523]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:20 localhost dnsmasq-dhcp[339523]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:20 localhost dnsmasq-dhcp[339523]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:20 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:20.464 270389 INFO neutron.agent.dhcp.agent [None req-001edd9d-6a3f-4399-beff-235cfe6d09de - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:17:18Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=0374b495-1774-4577-aa61-1168456d498a, ip_allocation=immediate, mac_address=fa:16:3e:c0:72:41, name=tempest-NetworksTestDHCPv6-2007528121, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:14:52Z, description=, dns_domain=, id=74049e43-4aa7-4318-9233-a58980c3495b, ipv4_address_scope=None, ipv6_address_scope=None, 
l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-670469551, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16402, qos_policy_id=None, revision_number=61, router:external=False, shared=False, standard_attr_id=1568, status=ACTIVE, subnets=['1fd267cd-53de-4d35-b42b-29d5b8ebfacd', '747d764c-80ab-4dbd-8c44-07ad9cb7b6e2'], tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:17:16Z, vlan_transparent=None, network_id=74049e43-4aa7-4318-9233-a58980c3495b, port_security_enabled=True, project_id=82fc7afce38344ffb7eda3bb0fabdb5b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10f25aec-a6f2-40dd-837d-8812e1c0ebb8'], standard_attr_id=2331, status=DOWN, tags=[], tenant_id=82fc7afce38344ffb7eda3bb0fabdb5b, updated_at=2025-10-14T10:17:18Z on network 74049e43-4aa7-4318-9233-a58980c3495b#033[00m Oct 14 06:17:20 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:20.589 270389 INFO neutron.agent.dhcp.agent [None req-7bb6c4c2-d3af-448f-b487-69ca30323d53 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed#033[00m Oct 14 06:17:20 localhost nova_compute[295778]: 2025-10-14 10:17:20.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:20 localhost dnsmasq[339523]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 2 addresses Oct 14 06:17:20 localhost podman[339542]: 2025-10-14 10:17:20.632532127 +0000 UTC m=+0.055313112 container kill e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, 
org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:17:20 localhost dnsmasq-dhcp[339523]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:20 localhost dnsmasq-dhcp[339523]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:20 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:20.809 270389 INFO neutron.agent.dhcp.agent [None req-08f6083d-098c-402f-963a-aca0fe863a65 - - - - - -] DHCP configuration for ports {'0374b495-1774-4577-aa61-1168456d498a'} is completed#033[00m Oct 14 06:17:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e148 do_prune osdmap full prune enabled Oct 14 06:17:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e149 e149: 6 total, 6 up, 6 in Oct 14 06:17:20 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e149: 6 total, 6 up, 6 in Oct 14 06:17:20 localhost dnsmasq[339523]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:20 localhost podman[339579]: 2025-10-14 10:17:20.961332744 +0000 UTC m=+0.061396024 container kill e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 14 06:17:20 localhost 
dnsmasq-dhcp[339523]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:20 localhost dnsmasq-dhcp[339523]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:21 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:21.165 2 INFO neutron.agent.securitygroups_rpc [None req-761d8405-b4ba-472e-86ed-7284e6cf7df2 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:17:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:17:21 localhost podman[339601]: 2025-10-14 10:17:21.306565239 +0000 UTC m=+0.090063637 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 192 MiB data, 879 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 139 op/s Oct 14 06:17:21 localhost systemd[1]: tmp-crun.xXbm6X.mount: Deactivated successfully. 
Oct 14 06:17:21 localhost podman[339602]: 2025-10-14 10:17:21.358690916 +0000 UTC m=+0.139755059 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:17:21 localhost podman[339602]: 2025-10-14 10:17:21.369443342 +0000 UTC m=+0.150507455 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:17:21 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:17:21 localhost podman[339601]: 2025-10-14 10:17:21.386988648 +0000 UTC m=+0.170487046 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent) Oct 14 06:17:21 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:17:21 localhost podman[339659]: 2025-10-14 10:17:21.507615177 +0000 UTC m=+0.060169841 container kill e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:17:21 localhost dnsmasq[339523]: exiting on receipt of SIGTERM Oct 14 06:17:21 localhost systemd[1]: libpod-e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60.scope: Deactivated successfully. 
Oct 14 06:17:21 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:21.574 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:17:21 localhost podman[339673]: 2025-10-14 10:17:21.580559798 +0000 UTC m=+0.060251103 container died e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:21 localhost podman[339673]: 2025-10-14 10:17:21.612002464 +0000 UTC m=+0.091693719 container cleanup e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:17:21 localhost systemd[1]: libpod-conmon-e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60.scope: Deactivated successfully. 
Oct 14 06:17:21 localhost podman[339675]: 2025-10-14 10:17:21.662264632 +0000 UTC m=+0.134083839 container remove e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:17:22 localhost systemd[1]: var-lib-containers-storage-overlay-3cf5f6a33c0df93e301d8c7083aa21c53382f493ac0be013374b99670dd1f529-merged.mount: Deactivated successfully. Oct 14 06:17:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0a4229456506c5caf2a9503e6b1f546367a540bc8e186a1c06ce38072905d60-userdata-shm.mount: Deactivated successfully. Oct 14 06:17:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:17:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3106445596' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:17:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:17:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3106445596' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:17:22 localhost podman[339753]: Oct 14 06:17:22 localhost podman[339753]: 2025-10-14 10:17:22.506184014 +0000 UTC m=+0.097316991 container create cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:22 localhost systemd[1]: Started libpod-conmon-cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a.scope. Oct 14 06:17:22 localhost podman[339753]: 2025-10-14 10:17:22.458552096 +0000 UTC m=+0.049685103 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:22 localhost systemd[1]: Started libcrun container. 
Oct 14 06:17:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb03e4e87ccf79181177b6a60acfca4349d14520368cd8748e4a95e710015a97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:22 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:22.572 2 INFO neutron.agent.securitygroups_rpc [None req-3b1fd47a-b385-45e6-a18b-613582c496ea 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:22 localhost podman[339753]: 2025-10-14 10:17:22.576456663 +0000 UTC m=+0.167589640 container init cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 06:17:22 localhost podman[339753]: 2025-10-14 10:17:22.585517234 +0000 UTC m=+0.176650211 container start cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:22 localhost dnsmasq[339772]: started, version 2.85 cachesize 150 Oct 14 06:17:22 localhost 
dnsmasq[339772]: DNS service limited to local subnets Oct 14 06:17:22 localhost dnsmasq[339772]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:22 localhost dnsmasq[339772]: warning: no upstream servers configured Oct 14 06:17:22 localhost dnsmasq-dhcp[339772]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 14 06:17:22 localhost dnsmasq[339772]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/addn_hosts - 0 addresses Oct 14 06:17:22 localhost dnsmasq-dhcp[339772]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/host Oct 14 06:17:22 localhost dnsmasq-dhcp[339772]: read /var/lib/neutron/dhcp/74049e43-4aa7-4318-9233-a58980c3495b/opts Oct 14 06:17:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:22.874 270389 INFO neutron.agent.dhcp.agent [None req-e27006ed-bc18-419b-b04e-1f458ef615f9 - - - - - -] DHCP configuration for ports {'bb90059a-750e-43da-ba16-03b3dce8c155', '72b8f6e4-5ba3-438e-afab-2731847eecef'} is completed#033[00m Oct 14 06:17:22 localhost nova_compute[295778]: 2025-10-14 10:17:22.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:22 localhost nova_compute[295778]: 2025-10-14 10:17:22.921 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:17:22 localhost nova_compute[295778]: 2025-10-14 10:17:22.922 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired 
by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:17:22 localhost nova_compute[295778]: 2025-10-14 10:17:22.922 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:17:22 localhost nova_compute[295778]: 2025-10-14 10:17:22.922 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:17:22 localhost nova_compute[295778]: 2025-10-14 10:17:22.923 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:17:22 localhost dnsmasq[339772]: exiting on receipt of SIGTERM Oct 14 06:17:22 localhost podman[339789]: 2025-10-14 10:17:22.945017068 +0000 UTC m=+0.063045208 container kill cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3) Oct 14 06:17:22 localhost systemd[1]: libpod-cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a.scope: Deactivated successfully. Oct 14 06:17:23 localhost podman[339802]: 2025-10-14 10:17:23.015666397 +0000 UTC m=+0.058225599 container died cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009) Oct 14 06:17:23 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:23.031 2 INFO neutron.agent.securitygroups_rpc [None req-25897af1-4107-422e-ad6d-8107d81487b2 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:23 localhost podman[339802]: 2025-10-14 10:17:23.046011714 +0000 UTC m=+0.088570846 container cleanup cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:17:23 localhost systemd[1]: libpod-conmon-cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a.scope: Deactivated successfully. 
Oct 14 06:17:23 localhost podman[339804]: 2025-10-14 10:17:23.074884233 +0000 UTC m=+0.108327713 container remove cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74049e43-4aa7-4318-9233-a58980c3495b, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:17:23 localhost ovn_controller[156286]: 2025-10-14T10:17:23Z|00346|binding|INFO|Releasing lport 72b8f6e4-5ba3-438e-afab-2731847eecef from this chassis (sb_readonly=0) Oct 14 06:17:23 localhost kernel: device tap72b8f6e4-5b left promiscuous mode Oct 14 06:17:23 localhost ovn_controller[156286]: 2025-10-14T10:17:23Z|00347|binding|INFO|Setting lport 72b8f6e4-5ba3-438e-afab-2731847eecef down in Southbound Oct 14 06:17:23 localhost nova_compute[295778]: 2025-10-14 10:17:23.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:23.136 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe89:980b/64 2001:db8::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:device_owner': 
'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74049e43-4aa7-4318-9233-a58980c3495b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc7afce38344ffb7eda3bb0fabdb5b', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1f1366-6307-4914-922e-2b4f9757811b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=72b8f6e4-5ba3-438e-afab-2731847eecef) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:23.137 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 72b8f6e4-5ba3-438e-afab-2731847eecef in datapath 74049e43-4aa7-4318-9233-a58980c3495b unbound from our chassis#033[00m Oct 14 06:17:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:23.139 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74049e43-4aa7-4318-9233-a58980c3495b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:17:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:23.140 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[3fef4eed-087f-4224-a43e-e3a9ad1b361a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:23 localhost nova_compute[295778]: 2025-10-14 10:17:23.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 
192 MiB data, 879 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 3.1 MiB/s wr, 122 op/s Oct 14 06:17:23 localhost systemd[1]: var-lib-containers-storage-overlay-fb03e4e87ccf79181177b6a60acfca4349d14520368cd8748e4a95e710015a97-merged.mount: Deactivated successfully. Oct 14 06:17:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf4bab8d3c02f521dfd281b424ec4b66fbb19105f3cf51ef6b468c402689050a-userdata-shm.mount: Deactivated successfully. Oct 14 06:17:23 localhost nova_compute[295778]: 2025-10-14 10:17:23.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:17:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/280978075' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:17:23 localhost nova_compute[295778]: 2025-10-14 10:17:23.502 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:17:23 localhost systemd[1]: run-netns-qdhcp\x2d74049e43\x2d4aa7\x2d4318\x2d9233\x2da58980c3495b.mount: Deactivated successfully. Oct 14 06:17:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:23.542 270389 INFO neutron.agent.dhcp.agent [None req-bc15874c-971e-4109-901d-bce1e826f7d1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:23 localhost nova_compute[295778]: 2025-10-14 10:17:23.743 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:17:23 localhost nova_compute[295778]: 2025-10-14 10:17:23.745 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11435MB free_disk=41.77470016479492GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:17:23 localhost nova_compute[295778]: 2025-10-14 10:17:23.746 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:17:23 localhost nova_compute[295778]: 2025-10-14 10:17:23.747 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:17:24 localhost nova_compute[295778]: 2025-10-14 10:17:24.056 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:17:24 localhost nova_compute[295778]: 2025-10-14 10:17:24.057 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:17:24 localhost nova_compute[295778]: 2025-10-14 10:17:24.308 2 DEBUG oslo_concurrency.processutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:17:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:24.621 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:24 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:24.638 2 INFO neutron.agent.securitygroups_rpc [None req-2ea822f2-eb65-4bfd-b6c2-8220108a0ad2 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:17:24 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3359750243' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:17:24 localhost nova_compute[295778]: 2025-10-14 10:17:24.728 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:17:24 localhost nova_compute[295778]: 2025-10-14 10:17:24.735 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:17:24 localhost nova_compute[295778]: 2025-10-14 10:17:24.754 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider 
ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:17:24 localhost nova_compute[295778]: 2025-10-14 10:17:24.803 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:17:24 localhost nova_compute[295778]: 2025-10-14 10:17:24.804 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:17:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e149 do_prune osdmap full prune enabled Oct 14 06:17:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e150 e150: 6 total, 6 up, 6 in Oct 14 06:17:24 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e150: 6 total, 6 up, 6 in Oct 14 06:17:25 localhost nova_compute[295778]: 2025-10-14 10:17:25.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:25 localhost 
neutron_sriov_agent[263389]: 2025-10-14 10:17:25.128 2 INFO neutron.agent.securitygroups_rpc [None req-3095d98a-70fd-4677-894b-50b46d66e5f2 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 192 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 199 op/s Oct 14 06:17:25 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:25.575 2 INFO neutron.agent.securitygroups_rpc [None req-fd2f30ab-7af5-4689-be38-54c72b0865c4 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:25 localhost nova_compute[295778]: 2025-10-14 10:17:25.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:17:26 localhost podman[339893]: 2025-10-14 10:17:26.286966195 +0000 UTC m=+0.103206146 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 06:17:26 localhost podman[339894]: 2025-10-14 10:17:26.265968197 +0000 UTC m=+0.083613225 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 
(image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:17:26 localhost podman[339893]: 2025-10-14 10:17:26.327098603 +0000 UTC m=+0.143338514 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid) Oct 14 06:17:26 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:17:26 localhost podman[339894]: 2025-10-14 10:17:26.350229039 +0000 UTC m=+0.167874087 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:26 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:17:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:17:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:17:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:17:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:17:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:17:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:17:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' Oct 14 06:17:27 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:27.043 2 INFO neutron.agent.securitygroups_rpc [None req-d9ed0561-18b0-464c-bcbd-f3234b4677b2 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 192 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 194 op/s Oct 14 06:17:27 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:27 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:27 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:27 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:27 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:27 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:17:27 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:17:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:17:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": 
"client.admin"} : dispatch Oct 14 06:17:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:17:28 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:28 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 9e815c4f-95ea-42f0-9575-9d897bc0da55 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:17:28 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 9e815c4f-95ea-42f0-9575-9d897bc0da55 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:17:28 localhost ceph-mgr[300442]: [progress INFO root] Completed event 9e815c4f-95ea-42f0-9575-9d897bc0da55 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:17:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:17:28 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:17:28 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:28.055 2 INFO neutron.agent.securitygroups_rpc [None req-621b59c3-21d0-4a62-9715-c2a3ba627dfc 10b55ef66b7942fbb887281b08c1c2c4 64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:28 localhost nova_compute[295778]: 2025-10-14 10:17:28.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:28 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:28.576 2 INFO neutron.agent.securitygroups_rpc [None req-2fe7dd40-e5e5-426d-9c90-91a47b056eee 10b55ef66b7942fbb887281b08c1c2c4 
64a4f7cc952f4010aeadd1288d8b2d40 - - default default] Security group member updated ['82f65abf-851e-40c1-af7d-0dc1d45ee116']#033[00m Oct 14 06:17:28 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:17:28 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e150 do_prune osdmap full prune enabled Oct 14 06:17:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e151 e151: 6 total, 6 up, 6 in Oct 14 06:17:29 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e151: 6 total, 6 up, 6 in Oct 14 06:17:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 192 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 24 KiB/s wr, 89 op/s Oct 14 06:17:29 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:17:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:17:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:30 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:17:30 localhost podman[246584]: time="2025-10-14T10:17:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:17:30 localhost podman[246584]: @ - - [14/Oct/2025:10:17:30 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:17:30 localhost podman[246584]: @ - - [14/Oct/2025:10:17:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18890 "" "Go-http-client/1.1" Oct 14 06:17:30 localhost nova_compute[295778]: 2025-10-14 10:17:30.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:30 localhost nova_compute[295778]: 2025-10-14 10:17:30.804 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:30 localhost nova_compute[295778]: 2025-10-14 10:17:30.805 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:30 localhost nova_compute[295778]: 2025-10-14 10:17:30.805 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:17:30 localhost nova_compute[295778]: 2025-10-14 10:17:30.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:30 localhost nova_compute[295778]: 2025-10-14 10:17:30.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:17:30 localhost nova_compute[295778]: 2025-10-14 10:17:30.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:17:30 localhost nova_compute[295778]: 2025-10-14 10:17:30.922 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:17:30 localhost nova_compute[295778]: 2025-10-14 10:17:30.923 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 192 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 3.0 MiB/s rd, 25 KiB/s wr, 219 op/s Oct 14 06:17:32 localhost nova_compute[295778]: 2025-10-14 10:17:32.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:32 localhost nova_compute[295778]: 2025-10-14 10:17:32.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 192 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 24 KiB/s wr, 206 op/s Oct 14 06:17:33 localhost openstack_network_exporter[248748]: ERROR 10:17:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:17:33 localhost openstack_network_exporter[248748]: ERROR 10:17:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:17:33 localhost openstack_network_exporter[248748]: ERROR 10:17:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db 
server Oct 14 06:17:33 localhost openstack_network_exporter[248748]: ERROR 10:17:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:17:33 localhost openstack_network_exporter[248748]: Oct 14 06:17:33 localhost openstack_network_exporter[248748]: ERROR 10:17:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:17:33 localhost openstack_network_exporter[248748]: Oct 14 06:17:33 localhost nova_compute[295778]: 2025-10-14 10:17:33.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:17:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:17:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:17:34 localhost podman[340053]: 2025-10-14 10:17:34.556592218 +0000 UTC m=+0.094636509 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 14 06:17:34 localhost podman[340053]: 2025-10-14 10:17:34.597925738 +0000 UTC m=+0.135970069 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, vendor=Red Hat, Inc.) Oct 14 06:17:34 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:17:34 localhost podman[340055]: 2025-10-14 10:17:34.598743309 +0000 UTC m=+0.127427960 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:17:34 localhost podman[340054]: 2025-10-14 10:17:34.654618146 +0000 UTC m=+0.187286094 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, 
org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller) Oct 14 06:17:34 localhost podman[340055]: 2025-10-14 10:17:34.683279068 +0000 UTC m=+0.211963729 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', 
'--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:17:34 localhost podman[340054]: 2025-10-14 10:17:34.69009194 +0000 UTC m=+0.222759848 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3) Oct 14 06:17:34 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:17:34 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:17:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e151 do_prune osdmap full prune enabled Oct 14 06:17:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e152 e152: 6 total, 6 up, 6 in Oct 14 06:17:34 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e152: 6 total, 6 up, 6 in Oct 14 06:17:34 localhost nova_compute[295778]: 2025-10-14 10:17:34.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 192 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 5.5 MiB/s rd, 2.9 KiB/s wr, 177 op/s Oct 14 06:17:35 localhost nova_compute[295778]: 2025-10-14 10:17:35.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e152 do_prune osdmap full prune enabled Oct 14 06:17:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e153 e153: 6 total, 6 up, 6 in Oct 14 06:17:35 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e153: 6 total, 6 up, 6 in Oct 14 06:17:36 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:36.961 270389 INFO neutron.agent.linux.ip_lib 
[None req-d63c6a21-8f17-4882-ae42-400fedf5085d - - - - - -] Device tap543b4e58-15 cannot be used as it has no MAC address#033[00m Oct 14 06:17:36 localhost nova_compute[295778]: 2025-10-14 10:17:36.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:36 localhost kernel: device tap543b4e58-15 entered promiscuous mode Oct 14 06:17:36 localhost NetworkManager[5972]: [1760437056.9953] manager: (tap543b4e58-15): new Generic device (/org/freedesktop/NetworkManager/Devices/63) Oct 14 06:17:36 localhost ovn_controller[156286]: 2025-10-14T10:17:36Z|00348|binding|INFO|Claiming lport 543b4e58-156e-4e40-a742-215ef149dc8e for this chassis. Oct 14 06:17:36 localhost ovn_controller[156286]: 2025-10-14T10:17:36Z|00349|binding|INFO|543b4e58-156e-4e40-a742-215ef149dc8e: Claiming unknown Oct 14 06:17:36 localhost nova_compute[295778]: 2025-10-14 10:17:36.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:37 localhost systemd-udevd[340131]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:17:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:37.013 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b8394de28c74b2e99420d1b07ba3637', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=415d8148-4778-4ac7-aaec-0b8a35ceda16, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=543b4e58-156e-4e40-a742-215ef149dc8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:37.016 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 543b4e58-156e-4e40-a742-215ef149dc8e in datapath 4b6f4995-d785-4c72-9bf7-d69a17bdd5eb bound to our chassis#033[00m Oct 14 06:17:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:37.018 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4b6f4995-d785-4c72-9bf7-d69a17bdd5eb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:17:37 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:37.021 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[9581b6a9-5c0e-451e-8cb3-a334ce7ab5fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:37 localhost journal[236030]: ethtool ioctl error on tap543b4e58-15: No such device Oct 14 06:17:37 localhost journal[236030]: ethtool ioctl error on tap543b4e58-15: No such device Oct 14 06:17:37 localhost nova_compute[295778]: 2025-10-14 10:17:37.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:37 localhost ovn_controller[156286]: 2025-10-14T10:17:37Z|00350|binding|INFO|Setting lport 543b4e58-156e-4e40-a742-215ef149dc8e ovn-installed in OVS Oct 14 06:17:37 localhost ovn_controller[156286]: 2025-10-14T10:17:37Z|00351|binding|INFO|Setting lport 543b4e58-156e-4e40-a742-215ef149dc8e up in Southbound Oct 14 06:17:37 localhost journal[236030]: ethtool ioctl error on tap543b4e58-15: No such device Oct 14 06:17:37 localhost nova_compute[295778]: 2025-10-14 10:17:37.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:37 localhost journal[236030]: ethtool ioctl error on tap543b4e58-15: No such device Oct 14 06:17:37 localhost journal[236030]: ethtool ioctl error on tap543b4e58-15: No such device Oct 14 06:17:37 localhost journal[236030]: ethtool ioctl error on tap543b4e58-15: No such device Oct 14 06:17:37 localhost journal[236030]: ethtool ioctl error on tap543b4e58-15: No such device Oct 14 06:17:37 localhost journal[236030]: ethtool ioctl error on tap543b4e58-15: No such device Oct 14 06:17:37 localhost nova_compute[295778]: 2025-10-14 10:17:37.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:37 localhost nova_compute[295778]: 2025-10-14 10:17:37.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e153 do_prune osdmap full prune enabled Oct 14 06:17:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 192 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 5.5 MiB/s rd, 2.9 KiB/s wr, 177 op/s Oct 14 06:17:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e154 e154: 6 total, 6 up, 6 in Oct 14 06:17:37 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e154: 6 total, 6 up, 6 in Oct 14 06:17:37 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:37.682 2 INFO neutron.agent.securitygroups_rpc [None req-c53ba612-2959-49b9-907f-50100eb8726b 2e7cd4bda92349ddb9cbf7425b92390f d9d0afbea79e447cb971eaabb8beabe0 - - default default] Security group member updated ['1c1b1ebb-7217-404a-a5ad-52e80abb7fe1']#033[00m Oct 14 06:17:37 localhost nova_compute[295778]: 2025-10-14 10:17:37.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:17:38 localhost podman[340202]: Oct 14 06:17:38 localhost podman[340202]: 2025-10-14 10:17:38.097687144 +0000 UTC m=+0.086588865 container create 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:38 localhost systemd[1]: Started libpod-conmon-0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73.scope. Oct 14 06:17:38 localhost podman[340202]: 2025-10-14 10:17:38.049592135 +0000 UTC m=+0.038493926 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:38 localhost systemd[1]: Started libcrun container. Oct 14 06:17:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c03d68b31eaba4537e7d7443c8482e60c70f04822b163d1b980be13a23d9da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:38 localhost podman[340202]: 2025-10-14 10:17:38.196326578 +0000 UTC m=+0.185228299 container init 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:17:38 localhost podman[340202]: 2025-10-14 10:17:38.20878116 +0000 UTC m=+0.197682891 container start 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:38 localhost dnsmasq[340221]: started, version 2.85 cachesize 150 Oct 14 06:17:38 localhost dnsmasq[340221]: DNS service limited to local subnets Oct 14 06:17:38 localhost dnsmasq[340221]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:38 localhost dnsmasq[340221]: warning: no upstream servers configured Oct 14 06:17:38 localhost dnsmasq-dhcp[340221]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Oct 14 06:17:38 localhost dnsmasq[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/addn_hosts - 0 addresses Oct 14 06:17:38 localhost dnsmasq-dhcp[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/host Oct 14 06:17:38 localhost dnsmasq-dhcp[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/opts Oct 14 06:17:38 localhost nova_compute[295778]: 2025-10-14 10:17:38.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:38 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:38.438 270389 INFO neutron.agent.dhcp.agent [None req-70e9055b-69de-4908-aae4-b5301882558d - - - - - -] DHCP configuration for ports {'5206b495-9c00-493d-b74f-39b5a8698f6e'} is completed#033[00m Oct 14 06:17:38 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:38.496 2 INFO neutron.agent.securitygroups_rpc [None req-35368cb8-158c-4436-9630-caf5139e6dbf 2e7cd4bda92349ddb9cbf7425b92390f d9d0afbea79e447cb971eaabb8beabe0 - - default default] Security group member updated ['1c1b1ebb-7217-404a-a5ad-52e80abb7fe1']#033[00m Oct 14 06:17:38 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:38.956 2 INFO neutron.agent.securitygroups_rpc [None 
req-365d8d66-363a-4a0b-a787-22dccf61b748 2e7cd4bda92349ddb9cbf7425b92390f d9d0afbea79e447cb971eaabb8beabe0 - - default default] Security group member updated ['1c1b1ebb-7217-404a-a5ad-52e80abb7fe1']#033[00m Oct 14 06:17:39 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:39.116 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:17:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:17:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:17:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:17:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:17:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:17:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 192 MiB data, 880 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 2.3 KiB/s wr, 64 op/s Oct 14 06:17:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e154 do_prune osdmap full prune enabled Oct 14 06:17:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e155 e155: 6 total, 6 up, 6 in Oct 14 06:17:39 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e155: 6 total, 6 up, 6 in Oct 14 06:17:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:39 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:39.842 2 INFO neutron.agent.securitygroups_rpc [None req-25b80d9e-e1da-4936-8be0-6b0a4ba835b7 2e7cd4bda92349ddb9cbf7425b92390f d9d0afbea79e447cb971eaabb8beabe0 - - default default] Security group member updated 
['1c1b1ebb-7217-404a-a5ad-52e80abb7fe1']#033[00m Oct 14 06:17:40 localhost nova_compute[295778]: 2025-10-14 10:17:40.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:41.235 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:17:40Z, description=, device_id=b1907107-56b1-4d67-8fb6-57e758752500, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3d79da04-f486-4d51-b675-63b4c98460bf, ip_allocation=immediate, mac_address=fa:16:3e:5c:4c:08, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:17:34Z, description=, dns_domain=, id=4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-714005897, port_security_enabled=True, project_id=6b8394de28c74b2e99420d1b07ba3637, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60655, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2400, status=ACTIVE, subnets=['ac15aed1-4dbc-4560-ba8d-8198e75c8828'], tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:17:35Z, vlan_transparent=None, network_id=4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, port_security_enabled=False, project_id=6b8394de28c74b2e99420d1b07ba3637, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2444, status=DOWN, tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:17:40Z on network 
4b6f4995-d785-4c72-9bf7-d69a17bdd5eb#033[00m Oct 14 06:17:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 271 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 713 KiB/s rd, 7.8 MiB/s wr, 274 op/s Oct 14 06:17:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e155 do_prune osdmap full prune enabled Oct 14 06:17:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e156 e156: 6 total, 6 up, 6 in Oct 14 06:17:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e156: 6 total, 6 up, 6 in Oct 14 06:17:41 localhost dnsmasq[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/addn_hosts - 1 addresses Oct 14 06:17:41 localhost dnsmasq-dhcp[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/host Oct 14 06:17:41 localhost dnsmasq-dhcp[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/opts Oct 14 06:17:41 localhost podman[340239]: 2025-10-14 10:17:41.445927029 +0000 UTC m=+0.071625306 container kill 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:17:41 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:41.709 270389 INFO neutron.agent.dhcp.agent [None req-689420d7-e0bb-4c87-999d-a3b2dac169b3 - - - - - -] DHCP configuration for ports {'3d79da04-f486-4d51-b675-63b4c98460bf'} is completed#033[00m Oct 14 06:17:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 271 MiB 
data, 1.0 GiB used, 41 GiB / 42 GiB avail; 713 KiB/s rd, 7.8 MiB/s wr, 274 op/s Oct 14 06:17:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:43.357 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:17:40Z, description=, device_id=b1907107-56b1-4d67-8fb6-57e758752500, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3d79da04-f486-4d51-b675-63b4c98460bf, ip_allocation=immediate, mac_address=fa:16:3e:5c:4c:08, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:17:34Z, description=, dns_domain=, id=4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-714005897, port_security_enabled=True, project_id=6b8394de28c74b2e99420d1b07ba3637, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60655, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2400, status=ACTIVE, subnets=['ac15aed1-4dbc-4560-ba8d-8198e75c8828'], tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:17:35Z, vlan_transparent=None, network_id=4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, port_security_enabled=False, project_id=6b8394de28c74b2e99420d1b07ba3637, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2444, status=DOWN, tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:17:40Z on network 4b6f4995-d785-4c72-9bf7-d69a17bdd5eb#033[00m Oct 14 06:17:43 localhost nova_compute[295778]: 2025-10-14 10:17:43.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:43 localhost dnsmasq[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/addn_hosts - 1 addresses Oct 14 06:17:43 localhost dnsmasq-dhcp[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/host Oct 14 06:17:43 localhost podman[340277]: 2025-10-14 10:17:43.551351022 +0000 UTC m=+0.059119674 container kill 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS) Oct 14 06:17:43 localhost dnsmasq-dhcp[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/opts Oct 14 06:17:43 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:43.768 270389 INFO neutron.agent.dhcp.agent [None req-0178cb9b-cbde-4863-8970-bc1ca8c5c5e9 - - - - - -] DHCP configuration for ports {'3d79da04-f486-4d51-b675-63b4c98460bf'} is completed#033[00m Oct 14 06:17:44 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:44.542 2 INFO neutron.agent.securitygroups_rpc [None req-e35969f1-ada6-4389-8aa2-3aee82b24fd0 4abcf2207306448e9582b15f96b7ebff 3ea6a4a53034479f90ec8161c8b6ce29 - - default default] Security group member updated ['f8556b9e-ea71-4aa8-9e6b-de955a348819']#033[00m Oct 14 06:17:44 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:44.793 2 INFO neutron.agent.securitygroups_rpc [None req-e35969f1-ada6-4389-8aa2-3aee82b24fd0 4abcf2207306448e9582b15f96b7ebff 3ea6a4a53034479f90ec8161c8b6ce29 - - default default] Security group member updated 
['f8556b9e-ea71-4aa8-9e6b-de955a348819']#033[00m Oct 14 06:17:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e156 do_prune osdmap full prune enabled Oct 14 06:17:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e157 e157: 6 total, 6 up, 6 in Oct 14 06:17:44 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e157: 6 total, 6 up, 6 in Oct 14 06:17:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 271 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 780 KiB/s rd, 7.8 MiB/s wr, 360 op/s Oct 14 06:17:45 localhost dnsmasq[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/addn_hosts - 0 addresses Oct 14 06:17:45 localhost dnsmasq-dhcp[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/host Oct 14 06:17:45 localhost podman[340315]: 2025-10-14 10:17:45.695796131 +0000 UTC m=+0.055998801 container kill 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:17:45 localhost dnsmasq-dhcp[340221]: read /var/lib/neutron/dhcp/4b6f4995-d785-4c72-9bf7-d69a17bdd5eb/opts Oct 14 06:17:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:17:45 localhost nova_compute[295778]: 2025-10-14 10:17:45.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:45 localhost systemd[1]: tmp-crun.WwxBUV.mount: Deactivated successfully. Oct 14 06:17:45 localhost podman[340328]: 2025-10-14 10:17:45.814801838 +0000 UTC m=+0.094728542 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:17:45 localhost podman[340328]: 2025-10-14 10:17:45.829261282 +0000 UTC m=+0.109188016 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
config_id=edpm) Oct 14 06:17:45 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:17:46 localhost kernel: device tap543b4e58-15 left promiscuous mode Oct 14 06:17:46 localhost nova_compute[295778]: 2025-10-14 10:17:46.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:46 localhost ovn_controller[156286]: 2025-10-14T10:17:46Z|00352|binding|INFO|Releasing lport 543b4e58-156e-4e40-a742-215ef149dc8e from this chassis (sb_readonly=0) Oct 14 06:17:46 localhost ovn_controller[156286]: 2025-10-14T10:17:46Z|00353|binding|INFO|Setting lport 543b4e58-156e-4e40-a742-215ef149dc8e down in Southbound Oct 14 06:17:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:46.105 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b8394de28c74b2e99420d1b07ba3637', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=415d8148-4778-4ac7-aaec-0b8a35ceda16, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=543b4e58-156e-4e40-a742-215ef149dc8e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:46.107 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 543b4e58-156e-4e40-a742-215ef149dc8e in datapath 4b6f4995-d785-4c72-9bf7-d69a17bdd5eb unbound from our chassis#033[00m Oct 14 06:17:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:46.109 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4b6f4995-d785-4c72-9bf7-d69a17bdd5eb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:17:46 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:46.111 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e62959-ec33-41e5-9bfa-0cc55c19b272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:46 localhost nova_compute[295778]: 2025-10-14 10:17:46.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:46 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:46.495 2 INFO neutron.agent.securitygroups_rpc [None req-4edfd190-8677-4ed2-97b2-53949f840179 4abcf2207306448e9582b15f96b7ebff 3ea6a4a53034479f90ec8161c8b6ce29 - - default default] Security group member updated ['f8556b9e-ea71-4aa8-9e6b-de955a348819']#033[00m Oct 14 06:17:46 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:46.879 2 INFO neutron.agent.securitygroups_rpc [None req-75011e6b-f866-4f68-81e1-8496b330b115 4abcf2207306448e9582b15f96b7ebff 3ea6a4a53034479f90ec8161c8b6ce29 - - default default] Security group 
member updated ['f8556b9e-ea71-4aa8-9e6b-de955a348819']#033[00m Oct 14 06:17:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e157 do_prune osdmap full prune enabled Oct 14 06:17:46 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:46.899 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e158 e158: 6 total, 6 up, 6 in Oct 14 06:17:46 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e158: 6 total, 6 up, 6 in Oct 14 06:17:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 271 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 28 KiB/s wr, 86 op/s Oct 14 06:17:48 localhost nova_compute[295778]: 2025-10-14 10:17:48.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:17:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2418449549' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:17:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:17:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2418449549' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:17:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:49.291 270389 INFO neutron.agent.linux.ip_lib [None req-a5205b07-4b31-4252-a09b-8e57a9fb95b8 - - - - - -] Device tap645b08db-9c cannot be used as it has no MAC address#033[00m Oct 14 06:17:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 271 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 21 KiB/s wr, 65 op/s Oct 14 06:17:49 localhost nova_compute[295778]: 2025-10-14 10:17:49.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:49 localhost kernel: device tap645b08db-9c entered promiscuous mode Oct 14 06:17:49 localhost NetworkManager[5972]: [1760437069.3683] manager: (tap645b08db-9c): new Generic device (/org/freedesktop/NetworkManager/Devices/64) Oct 14 06:17:49 localhost ovn_controller[156286]: 2025-10-14T10:17:49Z|00354|binding|INFO|Claiming lport 645b08db-9c45-49ac-a7b5-276d15e1039b for this chassis. Oct 14 06:17:49 localhost ovn_controller[156286]: 2025-10-14T10:17:49Z|00355|binding|INFO|645b08db-9c45-49ac-a7b5-276d15e1039b: Claiming unknown Oct 14 06:17:49 localhost nova_compute[295778]: 2025-10-14 10:17:49.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:49 localhost systemd-udevd[340367]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:17:49 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:49.382 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-4332f611-ef5b-4530-97fe-ac580679cec0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4332f611-ef5b-4530-97fe-ac580679cec0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c4e628039e94868b41efbbdc1307f19', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4979737-65a6-47b2-9379-ee1358b7d572, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=645b08db-9c45-49ac-a7b5-276d15e1039b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:49 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:49.384 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 645b08db-9c45-49ac-a7b5-276d15e1039b in datapath 4332f611-ef5b-4530-97fe-ac580679cec0 bound to our chassis#033[00m Oct 14 06:17:49 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:49.386 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4332f611-ef5b-4530-97fe-ac580679cec0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:17:49 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:49.387 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d42361-7c57-4ed4-8c3d-79eb21afe171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:49 localhost journal[236030]: ethtool ioctl error on tap645b08db-9c: No such device Oct 14 06:17:49 localhost ovn_controller[156286]: 2025-10-14T10:17:49Z|00356|binding|INFO|Setting lport 645b08db-9c45-49ac-a7b5-276d15e1039b ovn-installed in OVS Oct 14 06:17:49 localhost ovn_controller[156286]: 2025-10-14T10:17:49Z|00357|binding|INFO|Setting lport 645b08db-9c45-49ac-a7b5-276d15e1039b up in Southbound Oct 14 06:17:49 localhost journal[236030]: ethtool ioctl error on tap645b08db-9c: No such device Oct 14 06:17:49 localhost nova_compute[295778]: 2025-10-14 10:17:49.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:49 localhost journal[236030]: ethtool ioctl error on tap645b08db-9c: No such device Oct 14 06:17:49 localhost journal[236030]: ethtool ioctl error on tap645b08db-9c: No such device Oct 14 06:17:49 localhost journal[236030]: ethtool ioctl error on tap645b08db-9c: No such device Oct 14 06:17:49 localhost journal[236030]: ethtool ioctl error on tap645b08db-9c: No such device Oct 14 06:17:49 localhost journal[236030]: ethtool ioctl error on tap645b08db-9c: No such device Oct 14 06:17:49 localhost journal[236030]: ethtool ioctl error on tap645b08db-9c: No such device Oct 14 06:17:49 localhost nova_compute[295778]: 2025-10-14 10:17:49.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:49 localhost nova_compute[295778]: 2025-10-14 10:17:49.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:50 localhost nova_compute[295778]: 2025-10-14 10:17:50.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:50 localhost systemd[1]: tmp-crun.H2d6L4.mount: Deactivated successfully. Oct 14 06:17:50 localhost dnsmasq[340221]: exiting on receipt of SIGTERM Oct 14 06:17:50 localhost podman[340431]: 2025-10-14 10:17:50.991431184 +0000 UTC m=+0.065455313 container kill 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:17:50 localhost systemd[1]: libpod-0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73.scope: Deactivated successfully. 
Oct 14 06:17:51 localhost podman[340460]: 2025-10-14 10:17:51.070880297 +0000 UTC m=+0.062878903 container died 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS) Oct 14 06:17:51 localhost podman[340460]: 2025-10-14 10:17:51.117150479 +0000 UTC m=+0.109149045 container cleanup 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:17:51 localhost systemd[1]: libpod-conmon-0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73.scope: Deactivated successfully. 
Oct 14 06:17:51 localhost podman[340487]: Oct 14 06:17:51 localhost podman[340487]: 2025-10-14 10:17:51.175515502 +0000 UTC m=+0.093840418 container create 183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4332f611-ef5b-4530-97fe-ac580679cec0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:51 localhost systemd[1]: Started libpod-conmon-183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07.scope. Oct 14 06:17:51 localhost podman[340487]: 2025-10-14 10:17:51.129927468 +0000 UTC m=+0.048252434 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:51 localhost systemd[1]: Started libcrun container. 
Oct 14 06:17:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91460a53f75e0738d1a1d3710a3efc4bf382717fb6d4d3207ea9ecbdeef451d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:51 localhost podman[340487]: 2025-10-14 10:17:51.251537204 +0000 UTC m=+0.169862120 container init 183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4332f611-ef5b-4530-97fe-ac580679cec0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:51 localhost podman[340468]: 2025-10-14 10:17:51.257357729 +0000 UTC m=+0.233282168 container remove 0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b6f4995-d785-4c72-9bf7-d69a17bdd5eb, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:17:51 localhost podman[340487]: 2025-10-14 10:17:51.262020422 +0000 UTC m=+0.180345338 container start 183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4332f611-ef5b-4530-97fe-ac580679cec0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS) Oct 14 06:17:51 localhost dnsmasq[340512]: started, version 2.85 cachesize 150 Oct 14 06:17:51 localhost dnsmasq[340512]: DNS service limited to local subnets Oct 14 06:17:51 localhost dnsmasq[340512]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:51 localhost dnsmasq[340512]: warning: no upstream servers configured Oct 14 06:17:51 localhost dnsmasq-dhcp[340512]: DHCP, static leases only on 10.100.255.240, lease time 1d Oct 14 06:17:51 localhost dnsmasq[340512]: read /var/lib/neutron/dhcp/4332f611-ef5b-4530-97fe-ac580679cec0/addn_hosts - 0 addresses Oct 14 06:17:51 localhost dnsmasq-dhcp[340512]: read /var/lib/neutron/dhcp/4332f611-ef5b-4530-97fe-ac580679cec0/host Oct 14 06:17:51 localhost dnsmasq-dhcp[340512]: read /var/lib/neutron/dhcp/4332f611-ef5b-4530-97fe-ac580679cec0/opts Oct 14 06:17:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 271 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 24 KiB/s wr, 150 op/s Oct 14 06:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:17:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c6c6fe18-c662-4c0a-9020-f80a84d9eaf7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:17:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:17:51 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:17:51 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:17:51 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:17:51 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:17:51 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:17:51.478+0000 7ff5d7f75640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:17:51 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:17:51.478+0000 7ff5d7f75640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:17:51 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:17:51.478+0000 7ff5d7f75640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:17:51 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:17:51 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:17:51.478+0000 7ff5d7f75640 -1 client.0 error registering admin socket command: (17) File exists 
Oct 14 06:17:51 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:17:51.478+0000 7ff5d7f75640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:17:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:51.491 270389 INFO neutron.agent.dhcp.agent [None req-89dfe6e2-faf0-4176-8c31-fd43b5bdaa13 - - - - - -] DHCP configuration for ports {'896bd257-f315-499e-bf4d-9fd9357a414c'} is completed#033[00m Oct 14 06:17:51 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:51.540 270389 INFO neutron.agent.dhcp.agent [None req-15249ed4-003d-4f79-b6bd-3054b9fd07f6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:51 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7/.meta.tmp' Oct 14 06:17:51 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7/.meta.tmp' to config b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7/.meta' Oct 14 06:17:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:17:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c6c6fe18-c662-4c0a-9020-f80a84d9eaf7", "format": "json"}]: dispatch Oct 14 06:17:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:17:51 localhost ceph-mgr[300442]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:17:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:17:51 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:17:51 localhost podman[340513]: 2025-10-14 10:17:51.599957963 +0000 UTC m=+0.137894189 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:51 localhost podman[340513]: 2025-10-14 10:17:51.608080619 +0000 UTC m=+0.146016845 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:17:51 localhost podman[340514]: 2025-10-14 10:17:51.562555968 +0000 UTC m=+0.099627591 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:17:51 localhost podman[340514]: 2025-10-14 10:17:51.645392062 +0000 UTC m=+0.182463715 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:17:51 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:17:51 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:17:51 localhost systemd[1]: tmp-crun.c5PYml.mount: Deactivated successfully. Oct 14 06:17:51 localhost systemd[1]: var-lib-containers-storage-overlay-90c03d68b31eaba4537e7d7443c8482e60c70f04822b163d1b980be13a23d9da-merged.mount: Deactivated successfully. Oct 14 06:17:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0831d4e3b69e202b3b9ab7072123255eeda0a8800b5771ac0f6353c28aeace73-userdata-shm.mount: Deactivated successfully. Oct 14 06:17:51 localhost systemd[1]: run-netns-qdhcp\x2d4b6f4995\x2dd785\x2d4c72\x2d9bf7\x2dd69a17bdd5eb.mount: Deactivated successfully. 
Oct 14 06:17:52 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e42: np0005486731.swasqz(active, since 9m), standbys: np0005486732.pasqzz, np0005486733.primvu Oct 14 06:17:52 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:52.611 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:53 localhost nova_compute[295778]: 2025-10-14 10:17:53.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 271 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 142 op/s Oct 14 06:17:53 localhost nova_compute[295778]: 2025-10-14 10:17:53.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:17:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 317 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 115 op/s Oct 14 06:17:55 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c6c6fe18-c662-4c0a-9020-f80a84d9eaf7", "snap_name": "c735fcf7-d165-4a2c-a41a-231ba337ec1b", "format": "json"}]: dispatch Oct 14 06:17:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c735fcf7-d165-4a2c-a41a-231ba337ec1b, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:17:55 
localhost nova_compute[295778]: 2025-10-14 10:17:55.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c735fcf7-d165-4a2c-a41a-231ba337ec1b, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:17:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:17:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:17:56 localhost podman[340570]: 2025-10-14 10:17:56.538521998 +0000 UTC m=+0.078470048 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:56 localhost podman[340571]: 2025-10-14 10:17:56.595905064 +0000 UTC m=+0.133070610 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:56 localhost podman[340571]: 2025-10-14 10:17:56.610201736 +0000 UTC m=+0.147367242 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:17:56 localhost podman[340570]: 2025-10-14 10:17:56.626427256 +0000 UTC m=+0.166375356 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 
14 06:17:56 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:17:56 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:17:56 localhost neutron_sriov_agent[263389]: 2025-10-14 10:17:56.794 2 INFO neutron.agent.securitygroups_rpc [None req-e740ef6c-0d76-4410-beb6-5f76db4534e4 72fde6d55cf34982a256eb50b9f6d56d 6b8394de28c74b2e99420d1b07ba3637 - - default default] Security group member updated ['c032904b-0f74-49ea-92f0-78e8713215a7']#033[00m Oct 14 06:17:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:17:57 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2252742044' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:17:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:17:57 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2252742044' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:17:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 317 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 111 op/s Oct 14 06:17:57 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:57.412 270389 INFO neutron.agent.linux.ip_lib [None req-643b9593-b2b4-40eb-9d3a-2a9cf0dddaf8 - - - - - -] Device tap1db65a61-4e cannot be used as it has no MAC address#033[00m Oct 14 06:17:57 localhost nova_compute[295778]: 2025-10-14 10:17:57.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:57 localhost kernel: device tap1db65a61-4e entered promiscuous mode Oct 14 06:17:57 localhost NetworkManager[5972]: [1760437077.4450] manager: (tap1db65a61-4e): new Generic device (/org/freedesktop/NetworkManager/Devices/65) Oct 14 06:17:57 localhost nova_compute[295778]: 2025-10-14 10:17:57.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:57 localhost ovn_controller[156286]: 2025-10-14T10:17:57Z|00358|binding|INFO|Claiming lport 1db65a61-4e41-42bd-bfac-42a3e97ac0f1 for this chassis. Oct 14 06:17:57 localhost ovn_controller[156286]: 2025-10-14T10:17:57Z|00359|binding|INFO|1db65a61-4e41-42bd-bfac-42a3e97ac0f1: Claiming unknown Oct 14 06:17:57 localhost systemd-udevd[340619]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:17:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:57.465 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ea6a4a53034479f90ec8161c8b6ce29', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ea0b221-b5dd-4ca3-8af0-a987b93c6ed7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1db65a61-4e41-42bd-bfac-42a3e97ac0f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:57.466 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 1db65a61-4e41-42bd-bfac-42a3e97ac0f1 in datapath f177f71f-9d59-40c6-9201-e1eb0d5e5b0c bound to our chassis#033[00m Oct 14 06:17:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:57.470 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port d0314bcb-582d-44fb-8a11-f1862f307e1a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:17:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:57.470 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f177f71f-9d59-40c6-9201-e1eb0d5e5b0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:17:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:57.471 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[22607f56-7a2a-4153-9012-5362c5241176]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:57 localhost ovn_controller[156286]: 2025-10-14T10:17:57Z|00360|binding|INFO|Setting lport 1db65a61-4e41-42bd-bfac-42a3e97ac0f1 ovn-installed in OVS Oct 14 06:17:57 localhost ovn_controller[156286]: 2025-10-14T10:17:57Z|00361|binding|INFO|Setting lport 1db65a61-4e41-42bd-bfac-42a3e97ac0f1 up in Southbound Oct 14 06:17:57 localhost nova_compute[295778]: 2025-10-14 10:17:57.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:57 localhost nova_compute[295778]: 2025-10-14 10:17:57.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:57 localhost nova_compute[295778]: 2025-10-14 10:17:57.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:57.642 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:17:57 localhost 
ovn_metadata_agent[161927]: 2025-10-14 10:17:57.642 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:17:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:57.643 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:17:58 localhost nova_compute[295778]: 2025-10-14 10:17:58.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:58 localhost podman[340675]: Oct 14 06:17:58 localhost podman[340675]: 2025-10-14 10:17:58.617128857 +0000 UTC m=+0.107511471 container create 43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:17:58 localhost podman[340675]: 2025-10-14 10:17:58.567859436 +0000 UTC m=+0.058242050 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:17:58 localhost systemd[1]: Started libpod-conmon-43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97.scope. Oct 14 06:17:58 localhost systemd[1]: Started libcrun container. 
Oct 14 06:17:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ecc7259bd3396219a55291d660f338f5c4ad634c2f3105abcc9e3e5d0cf080/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:17:58 localhost podman[340675]: 2025-10-14 10:17:58.703429293 +0000 UTC m=+0.193811857 container init 43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009) Oct 14 06:17:58 localhost podman[340675]: 2025-10-14 10:17:58.708309112 +0000 UTC m=+0.198691676 container start 43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:17:58 localhost dnsmasq[340693]: started, version 2.85 cachesize 150 Oct 14 06:17:58 localhost dnsmasq[340693]: DNS service limited to local subnets Oct 14 06:17:58 localhost dnsmasq[340693]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:17:58 localhost dnsmasq[340693]: warning: no upstream servers 
configured Oct 14 06:17:58 localhost dnsmasq-dhcp[340693]: DHCP, static leases only on 10.100.0.16, lease time 1d Oct 14 06:17:58 localhost dnsmasq[340693]: read /var/lib/neutron/dhcp/f177f71f-9d59-40c6-9201-e1eb0d5e5b0c/addn_hosts - 0 addresses Oct 14 06:17:58 localhost dnsmasq-dhcp[340693]: read /var/lib/neutron/dhcp/f177f71f-9d59-40c6-9201-e1eb0d5e5b0c/host Oct 14 06:17:58 localhost dnsmasq-dhcp[340693]: read /var/lib/neutron/dhcp/f177f71f-9d59-40c6-9201-e1eb0d5e5b0c/opts Oct 14 06:17:58 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:58.870 270389 INFO neutron.agent.dhcp.agent [None req-f6e263cc-5452-459e-8f3c-bc5e3249e17a - - - - - -] DHCP configuration for ports {'5d339ab3-96ee-407a-88cc-cdbe8e69bafb'} is completed#033[00m Oct 14 06:17:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e158 do_prune osdmap full prune enabled Oct 14 06:17:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e159 e159: 6 total, 6 up, 6 in Oct 14 06:17:58 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e159: 6 total, 6 up, 6 in Oct 14 06:17:59 localhost dnsmasq[340693]: exiting on receipt of SIGTERM Oct 14 06:17:59 localhost podman[340711]: 2025-10-14 10:17:59.074851984 +0000 UTC m=+0.069876220 container kill 43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:17:59 localhost systemd[1]: libpod-43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97.scope: Deactivated successfully. 
Oct 14 06:17:59 localhost podman[340724]: 2025-10-14 10:17:59.163439881 +0000 UTC m=+0.073645400 container died 43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:17:59 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:59.183 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d0314bcb-582d-44fb-8a11-f1862f307e1a with type ""#033[00m Oct 14 06:17:59 localhost ovn_controller[156286]: 2025-10-14T10:17:59Z|00362|binding|INFO|Removing iface tap1db65a61-4e ovn-installed in OVS Oct 14 06:17:59 localhost ovn_controller[156286]: 2025-10-14T10:17:59Z|00363|binding|INFO|Removing lport 1db65a61-4e41-42bd-bfac-42a3e97ac0f1 ovn-installed in OVS Oct 14 06:17:59 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:59.186 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c', 'neutron:port_capabilities': '', 
'neutron:port_name': '', 'neutron:project_id': '3ea6a4a53034479f90ec8161c8b6ce29', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ea0b221-b5dd-4ca3-8af0-a987b93c6ed7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1db65a61-4e41-42bd-bfac-42a3e97ac0f1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:17:59 localhost nova_compute[295778]: 2025-10-14 10:17:59.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:59 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:59.189 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 1db65a61-4e41-42bd-bfac-42a3e97ac0f1 in datapath f177f71f-9d59-40c6-9201-e1eb0d5e5b0c unbound from our chassis#033[00m Oct 14 06:17:59 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:59.192 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f177f71f-9d59-40c6-9201-e1eb0d5e5b0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:17:59 localhost ovn_metadata_agent[161927]: 2025-10-14 10:17:59.194 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec306a9-c0d1-4980-acfa-f410fd0d4484]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:17:59 localhost nova_compute[295778]: 2025-10-14 10:17:59.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:59 localhost podman[340724]: 2025-10-14 10:17:59.196254284 +0000 UTC 
m=+0.106459763 container cleanup 43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:17:59 localhost systemd[1]: libpod-conmon-43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97.scope: Deactivated successfully. Oct 14 06:17:59 localhost podman[340726]: 2025-10-14 10:17:59.241549899 +0000 UTC m=+0.141176097 container remove 43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f177f71f-9d59-40c6-9201-e1eb0d5e5b0c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:17:59 localhost nova_compute[295778]: 2025-10-14 10:17:59.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:59 localhost kernel: device tap1db65a61-4e left promiscuous mode Oct 14 06:17:59 localhost nova_compute[295778]: 2025-10-14 10:17:59.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:17:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:59.287 270389 INFO neutron.agent.dhcp.agent 
[None req-e2c5fa63-ac62-4963-9549-0bdee9ea4199 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 317 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 115 op/s Oct 14 06:17:59 localhost systemd[1]: var-lib-containers-storage-overlay-a0ecc7259bd3396219a55291d660f338f5c4ad634c2f3105abcc9e3e5d0cf080-merged.mount: Deactivated successfully. Oct 14 06:17:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43af820778f83d5bcf98dbfaed39024c992505cacfd1db26f40d8a443b99cb97-userdata-shm.mount: Deactivated successfully. Oct 14 06:17:59 localhost systemd[1]: run-netns-qdhcp\x2df177f71f\x2d9d59\x2d40c6\x2d9201\x2de1eb0d5e5b0c.mount: Deactivated successfully. Oct 14 06:17:59 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:17:59.731 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:17:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:18:00 localhost nova_compute[295778]: 2025-10-14 10:18:00.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c6c6fe18-c662-4c0a-9020-f80a84d9eaf7", "snap_name": "c735fcf7-d165-4a2c-a41a-231ba337ec1b_287951fa-729f-414e-9029-4c898f3493d3", "force": true, "format": "json"}]: dispatch Oct 14 06:18:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:c735fcf7-d165-4a2c-a41a-231ba337ec1b_287951fa-729f-414e-9029-4c898f3493d3, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:18:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7/.meta.tmp' Oct 14 06:18:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7/.meta.tmp' to config b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7/.meta' Oct 14 06:18:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c735fcf7-d165-4a2c-a41a-231ba337ec1b_287951fa-729f-414e-9029-4c898f3493d3, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:18:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c6c6fe18-c662-4c0a-9020-f80a84d9eaf7", "snap_name": "c735fcf7-d165-4a2c-a41a-231ba337ec1b", "force": true, "format": "json"}]: dispatch Oct 14 06:18:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c735fcf7-d165-4a2c-a41a-231ba337ec1b, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:18:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7/.meta.tmp' Oct 14 06:18:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7/.meta.tmp' to config 
b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7/.meta' Oct 14 06:18:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c735fcf7-d165-4a2c-a41a-231ba337ec1b, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:18:00 localhost podman[246584]: time="2025-10-14T10:18:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:18:00 localhost podman[246584]: @ - - [14/Oct/2025:10:18:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 14 06:18:00 localhost podman[246584]: @ - - [14/Oct/2025:10:18:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19355 "" "Go-http-client/1.1" Oct 14 06:18:00 localhost nova_compute[295778]: 2025-10-14 10:18:00.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:01 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:01.218 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:18:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 335 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 8.5 MiB/s wr, 108 op/s Oct 14 06:18:01 localhost neutron_sriov_agent[263389]: 2025-10-14 10:18:01.375 2 INFO neutron.agent.securitygroups_rpc [None req-b53395c4-e365-498d-b8c4-80ffc4a91847 72fde6d55cf34982a256eb50b9f6d56d 6b8394de28c74b2e99420d1b07ba3637 - - default default] Security group member updated ['c032904b-0f74-49ea-92f0-78e8713215a7']#033[00m Oct 14 06:18:02 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e159 do_prune osdmap full prune enabled Oct 14 06:18:02 localhost 
ceph-mon[307093]: mon.np0005486731@0(leader).osd e160 e160: 6 total, 6 up, 6 in Oct 14 06:18:02 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e160: 6 total, 6 up, 6 in Oct 14 06:18:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:02.411 270389 INFO neutron.agent.linux.ip_lib [None req-0e65d312-c59a-4cfa-9a26-353f55907bcf - - - - - -] Device tapa8f55172-8a cannot be used as it has no MAC address#033[00m Oct 14 06:18:02 localhost nova_compute[295778]: 2025-10-14 10:18:02.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:02 localhost kernel: device tapa8f55172-8a entered promiscuous mode Oct 14 06:18:02 localhost ovn_controller[156286]: 2025-10-14T10:18:02Z|00364|binding|INFO|Claiming lport a8f55172-8a32-4c96-9f30-a2f3398886b3 for this chassis. Oct 14 06:18:02 localhost ovn_controller[156286]: 2025-10-14T10:18:02Z|00365|binding|INFO|a8f55172-8a32-4c96-9f30-a2f3398886b3: Claiming unknown Oct 14 06:18:02 localhost nova_compute[295778]: 2025-10-14 10:18:02.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:02 localhost NetworkManager[5972]: [1760437082.4928] manager: (tapa8f55172-8a): new Generic device (/org/freedesktop/NetworkManager/Devices/66) Oct 14 06:18:02 localhost systemd-udevd[340765]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:18:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:02.499 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-671a4644-ad0d-456b-9b26-b22cf171c62d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671a4644-ad0d-456b-9b26-b22cf171c62d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c4e628039e94868b41efbbdc1307f19', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57baa4bc-8319-41cd-97e6-f62b384db5fb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a8f55172-8a32-4c96-9f30-a2f3398886b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:18:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:02.502 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a8f55172-8a32-4c96-9f30-a2f3398886b3 in datapath 671a4644-ad0d-456b-9b26-b22cf171c62d bound to our chassis#033[00m Oct 14 06:18:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:02.504 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 671a4644-ad0d-456b-9b26-b22cf171c62d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:18:02 localhost systemd-journald[47332]: Data hash table of /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Oct 14 06:18:02 localhost systemd-journald[47332]: /run/log/journal/8e1d5208cffec42b50976967e1d1cfd0/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 14 06:18:02 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 06:18:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:02.505 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b24a1456-77ce-4718-97dd-2e4e41bb7fe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:18:02 localhost journal[236030]: ethtool ioctl error on tapa8f55172-8a: No such device Oct 14 06:18:02 localhost journal[236030]: ethtool ioctl error on tapa8f55172-8a: No such device Oct 14 06:18:02 localhost ovn_controller[156286]: 2025-10-14T10:18:02Z|00366|binding|INFO|Setting lport a8f55172-8a32-4c96-9f30-a2f3398886b3 ovn-installed in OVS Oct 14 06:18:02 localhost ovn_controller[156286]: 2025-10-14T10:18:02Z|00367|binding|INFO|Setting lport a8f55172-8a32-4c96-9f30-a2f3398886b3 up in Southbound Oct 14 06:18:02 localhost nova_compute[295778]: 2025-10-14 10:18:02.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:02 localhost nova_compute[295778]: 2025-10-14 10:18:02.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:02 localhost journal[236030]: ethtool ioctl error on tapa8f55172-8a: No such device Oct 14 06:18:02 localhost journal[236030]: ethtool ioctl 
error on tapa8f55172-8a: No such device Oct 14 06:18:02 localhost journal[236030]: ethtool ioctl error on tapa8f55172-8a: No such device Oct 14 06:18:02 localhost journal[236030]: ethtool ioctl error on tapa8f55172-8a: No such device Oct 14 06:18:02 localhost journal[236030]: ethtool ioctl error on tapa8f55172-8a: No such device Oct 14 06:18:02 localhost journal[236030]: ethtool ioctl error on tapa8f55172-8a: No such device Oct 14 06:18:02 localhost nova_compute[295778]: 2025-10-14 10:18:02.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:02 localhost nova_compute[295778]: 2025-10-14 10:18:02.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:02 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 14 06:18:03 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e160 do_prune osdmap full prune enabled Oct 14 06:18:03 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e161 e161: 6 total, 6 up, 6 in Oct 14 06:18:03 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e161: 6 total, 6 up, 6 in Oct 14 06:18:03 localhost openstack_network_exporter[248748]: ERROR 10:18:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:18:03 localhost openstack_network_exporter[248748]: ERROR 10:18:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:18:03 localhost openstack_network_exporter[248748]: ERROR 10:18:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:18:03 localhost openstack_network_exporter[248748]: ERROR 10:18:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath 
Oct 14 06:18:03 localhost openstack_network_exporter[248748]: Oct 14 06:18:03 localhost openstack_network_exporter[248748]: ERROR 10:18:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:18:03 localhost openstack_network_exporter[248748]: Oct 14 06:18:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 335 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 11 MiB/s wr, 101 op/s Oct 14 06:18:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c6c6fe18-c662-4c0a-9020-f80a84d9eaf7", "format": "json"}]: dispatch Oct 14 06:18:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:18:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:18:03 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c6c6fe18-c662-4c0a-9020-f80a84d9eaf7' of type subvolume Oct 14 06:18:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.450+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c6c6fe18-c662-4c0a-9020-f80a84d9eaf7' of type subvolume Oct 14 06:18:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c6c6fe18-c662-4c0a-9020-f80a84d9eaf7", "force": true, "format": "json"}]: dispatch Oct 14 06:18:03 localhost 
ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:18:03 localhost nova_compute[295778]: 2025-10-14 10:18:03.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c6c6fe18-c662-4c0a-9020-f80a84d9eaf7'' moved to trashcan Oct 14 06:18:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:18:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c6c6fe18-c662-4c0a-9020-f80a84d9eaf7, vol_name:cephfs) < "" Oct 14 06:18:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.505+0000 7ff5daf7b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.505+0000 7ff5daf7b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.505+0000 7ff5daf7b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost 
ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.505+0000 7ff5daf7b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.505+0000 7ff5daf7b640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.554+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.554+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.554+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.554+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost 
ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:03.554+0000 7ff5da77a640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:18:03 localhost podman[340835]: Oct 14 06:18:03 localhost podman[340835]: 2025-10-14 10:18:03.597974117 +0000 UTC m=+0.112397951 container create b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-671a4644-ad0d-456b-9b26-b22cf171c62d, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 14 06:18:03 localhost podman[340835]: 2025-10-14 10:18:03.536199353 +0000 UTC m=+0.050623247 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:18:03 localhost systemd[1]: Started libpod-conmon-b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545.scope. Oct 14 06:18:03 localhost systemd[1]: Started libcrun container. 
Oct 14 06:18:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2759f7d87920e2bf5c9fae2b43d3738ae9f3b1dfc381f5888787db8f40b27697/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:18:03 localhost ovn_controller[156286]: 2025-10-14T10:18:03Z|00368|binding|INFO|Removing iface tapa8f55172-8a ovn-installed in OVS Oct 14 06:18:03 localhost ovn_controller[156286]: 2025-10-14T10:18:03Z|00369|binding|INFO|Removing lport a8f55172-8a32-4c96-9f30-a2f3398886b3 ovn-installed in OVS Oct 14 06:18:03 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:03.655 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 5d43e8d1-4c2d-4286-8b10-6e8867a9fc75 with type ""#033[00m Oct 14 06:18:03 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:03.657 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-671a4644-ad0d-456b-9b26-b22cf171c62d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671a4644-ad0d-456b-9b26-b22cf171c62d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c4e628039e94868b41efbbdc1307f19', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57baa4bc-8319-41cd-97e6-f62b384db5fb, chassis=[], tunnel_key=2, 
gateway_chassis=[], requested_chassis=[], logical_port=a8f55172-8a32-4c96-9f30-a2f3398886b3) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:18:03 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:03.659 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a8f55172-8a32-4c96-9f30-a2f3398886b3 in datapath 671a4644-ad0d-456b-9b26-b22cf171c62d unbound from our chassis#033[00m Oct 14 06:18:03 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:03.663 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 671a4644-ad0d-456b-9b26-b22cf171c62d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:18:03 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:03.690 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a03b0a-1287-4975-a3d8-69a0f079c6f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:18:03 localhost nova_compute[295778]: 2025-10-14 10:18:03.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:03 localhost nova_compute[295778]: 2025-10-14 10:18:03.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:03 localhost podman[340835]: 2025-10-14 10:18:03.695287596 +0000 UTC m=+0.209711430 container init b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-671a4644-ad0d-456b-9b26-b22cf171c62d, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:18:03 localhost podman[340835]: 2025-10-14 10:18:03.708510017 +0000 UTC m=+0.222933841 container start b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-671a4644-ad0d-456b-9b26-b22cf171c62d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:18:03 localhost dnsmasq[340878]: started, version 2.85 cachesize 150 Oct 14 06:18:03 localhost dnsmasq[340878]: DNS service limited to local subnets Oct 14 06:18:03 localhost dnsmasq[340878]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:18:03 localhost dnsmasq[340878]: warning: no upstream servers configured Oct 14 06:18:03 localhost dnsmasq-dhcp[340878]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:18:03 localhost dnsmasq[340878]: read /var/lib/neutron/dhcp/671a4644-ad0d-456b-9b26-b22cf171c62d/addn_hosts - 0 addresses Oct 14 06:18:03 localhost dnsmasq-dhcp[340878]: read /var/lib/neutron/dhcp/671a4644-ad0d-456b-9b26-b22cf171c62d/host Oct 14 06:18:03 localhost dnsmasq-dhcp[340878]: read /var/lib/neutron/dhcp/671a4644-ad0d-456b-9b26-b22cf171c62d/opts Oct 14 06:18:03 localhost kernel: device tapa8f55172-8a left promiscuous mode Oct 14 06:18:03 localhost nova_compute[295778]: 2025-10-14 10:18:03.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:03 localhost nova_compute[295778]: 2025-10-14 10:18:03.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:03.947 270389 INFO neutron.agent.dhcp.agent [None req-08ea174b-9cd6-493d-a7bf-bba6ca5c17fc - - - - - -] DHCP configuration for ports {'615cc842-93ad-4af9-8bc3-adb104a04204'} is completed#033[00m Oct 14 06:18:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e161 do_prune osdmap full prune enabled Oct 14 06:18:04 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e43: np0005486731.swasqz(active, since 9m), standbys: np0005486732.pasqzz, np0005486733.primvu Oct 14 06:18:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e162 e162: 6 total, 6 up, 6 in Oct 14 06:18:04 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e162: 6 total, 6 up, 6 in Oct 14 06:18:04 localhost dnsmasq[340878]: read /var/lib/neutron/dhcp/671a4644-ad0d-456b-9b26-b22cf171c62d/addn_hosts - 0 addresses Oct 14 06:18:04 localhost podman[340898]: 2025-10-14 10:18:04.314304904 +0000 UTC m=+0.062009771 container kill b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-671a4644-ad0d-456b-9b26-b22cf171c62d, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:18:04 localhost dnsmasq-dhcp[340878]: read /var/lib/neutron/dhcp/671a4644-ad0d-456b-9b26-b22cf171c62d/host Oct 14 06:18:04 localhost 
dnsmasq-dhcp[340878]: read /var/lib/neutron/dhcp/671a4644-ad0d-456b-9b26-b22cf171c62d/opts Oct 14 06:18:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:18:04 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1483350122' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:18:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:18:04 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1483350122' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent [None req-272e537b-a895-4e25-8f08-d02d00aa4e90 - - - - - -] Unable to reload_allocations dhcp for 671a4644-ad0d-456b-9b26-b22cf171c62d.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa8f55172-8a not found in namespace qdhcp-671a4644-ad0d-456b-9b26-b22cf171c62d. 
Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Oct 14 06:18:04 
localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent return fut.result() Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent return self.__get_result() Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent raise self._exception Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 
ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa8f55172-8a not found in namespace qdhcp-671a4644-ad0d-456b-9b26-b22cf171c62d. Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.340 270389 ERROR neutron.agent.dhcp.agent #033[00m Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.345 270389 INFO neutron.agent.dhcp.agent [None req-c9cf2e1f-f89a-45fa-a9d8-972ff4cdaa86 - - - - - -] Synchronizing state#033[00m Oct 14 06:18:04 localhost nova_compute[295778]: 2025-10-14 10:18:04.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:04 localhost systemd[1]: tmp-crun.4DoxQc.mount: Deactivated successfully. Oct 14 06:18:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:04.636 270389 INFO neutron.agent.dhcp.agent [None req-4b31c2e4-a246-4e6d-87d7-fedd1a3c51aa - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 14 06:18:04 localhost dnsmasq[340878]: exiting on receipt of SIGTERM Oct 14 06:18:04 localhost systemd[1]: libpod-b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545.scope: Deactivated successfully. 
Oct 14 06:18:04 localhost podman[340928]: 2025-10-14 10:18:04.824134208 +0000 UTC m=+0.066552682 container kill b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-671a4644-ad0d-456b-9b26-b22cf171c62d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:18:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:18:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:18:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:18:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:18:04 localhost podman[340944]: 2025-10-14 10:18:04.922972097 +0000 UTC m=+0.078804088 container died b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-671a4644-ad0d-456b-9b26-b22cf171c62d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true) Oct 14 06:18:05 localhost podman[340944]: 2025-10-14 10:18:05.060031593 +0000 UTC m=+0.215863524 container remove b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-671a4644-ad0d-456b-9b26-b22cf171c62d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:18:05 localhost systemd[1]: libpod-conmon-b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545.scope: Deactivated successfully. 
Oct 14 06:18:05 localhost podman[340956]: 2025-10-14 10:18:04.971524648 +0000 UTC m=+0.111972510 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:18:05 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:05.100 270389 INFO neutron.agent.dhcp.agent [None req-ad2aad74-298f-4481-a112-afc639cfb2a8 - - - - - -] Synchronizing state complete#033[00m Oct 14 06:18:05 localhost podman[340956]: 2025-10-14 10:18:05.116057854 +0000 UTC m=+0.256505716 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 06:18:05 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:18:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e162 do_prune osdmap full prune enabled Oct 14 06:18:05 localhost podman[340953]: 2025-10-14 10:18:05.044099399 +0000 UTC m=+0.186627345 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, 
container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 14 06:18:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e163 e163: 6 total, 6 up, 6 in Oct 14 06:18:05 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e163: 6 total, 6 up, 6 in Oct 14 06:18:05 localhost podman[340957]: 2025-10-14 10:18:05.119642749 +0000 UTC m=+0.253634828 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', 
'--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:18:05 localhost podman[340953]: 2025-10-14 10:18:05.179203483 +0000 UTC m=+0.321731439 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, architecture=x86_64, version=9.6) Oct 14 06:18:05 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:18:05 localhost podman[340957]: 2025-10-14 10:18:05.202793291 +0000 UTC m=+0.336785370 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 
06:18:05 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:18:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 479 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 131 KiB/s rd, 36 MiB/s wr, 192 op/s Oct 14 06:18:05 localhost systemd[1]: var-lib-containers-storage-overlay-2759f7d87920e2bf5c9fae2b43d3738ae9f3b1dfc381f5888787db8f40b27697-merged.mount: Deactivated successfully. Oct 14 06:18:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b093851d1d57159666e9bb7bdc33d303faad76ba36689fa840e6ca047e90c545-userdata-shm.mount: Deactivated successfully. Oct 14 06:18:05 localhost systemd[1]: run-netns-qdhcp\x2d671a4644\x2dad0d\x2d456b\x2d9b26\x2db22cf171c62d.mount: Deactivated successfully. Oct 14 06:18:05 localhost nova_compute[295778]: 2025-10-14 10:18:05.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:06 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e44: np0005486731.swasqz(active, since 9m), standbys: np0005486732.pasqzz, np0005486733.primvu Oct 14 06:18:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e163 do_prune osdmap full prune enabled Oct 14 06:18:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e164 e164: 6 total, 6 up, 6 in Oct 14 06:18:07 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e164: 6 total, 6 up, 6 in Oct 14 06:18:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 177 active+clean; 479 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 123 KiB/s rd, 34 MiB/s wr, 181 op/s Oct 14 06:18:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e164 do_prune osdmap full prune enabled Oct 14 06:18:08 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e165 e165: 6 total, 6 up, 6 in Oct 14 
06:18:08 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e165: 6 total, 6 up, 6 in Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0. Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.321996) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55 Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437088322039, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2361, "num_deletes": 260, "total_data_size": 3213965, "memory_usage": 3365136, "flush_reason": "Manual Compaction"} Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437088360638, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3151582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29103, "largest_seqno": 31463, "table_properties": {"data_size": 3141600, "index_size": 6359, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 22130, "raw_average_key_size": 21, "raw_value_size": 3121100, "raw_average_value_size": 3059, "num_data_blocks": 269, "num_entries": 1020, "num_filter_entries": 1020, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": 
"default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436937, "oldest_key_time": 1760436937, "file_creation_time": 1760437088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}} Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 38694 microseconds, and 7280 cpu microseconds. Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.360690) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3151582 bytes OK
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.360768) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.362581) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.362605) EVENT_LOG_v1 {"time_micros": 1760437088362598, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.362628) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3203946, prev total WAL file size 3203946, number of live WAL files 2.
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.363901) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end)
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3077KB)], [54(14MB)]
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437088363946, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 18257942, "oldest_snapshot_seqno": -1}
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 13033 keys, 17093616 bytes, temperature: kUnknown
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437088443278, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17093616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17020127, "index_size": 39788, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32645, "raw_key_size": 350950, "raw_average_key_size": 26, "raw_value_size": 16798937, "raw_average_value_size": 1288, "num_data_blocks": 1486, "num_entries": 13033, "num_filter_entries": 13033, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760437088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:18:08 localhost nova_compute[295778]: 2025-10-14 10:18:08.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.443554) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17093616 bytes
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.471206) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.9 rd, 215.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 14.4 +0.0 blob) out(16.3 +0.0 blob), read-write-amplify(11.2) write-amplify(5.4) OK, records in: 13575, records dropped: 542 output_compression: NoCompression
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.471243) EVENT_LOG_v1 {"time_micros": 1760437088471226, "job": 32, "event": "compaction_finished", "compaction_time_micros": 79404, "compaction_time_cpu_micros": 49404, "output_level": 6, "num_output_files": 1, "total_output_size": 17093616, "num_input_records": 13575, "num_output_records": 13033, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437088471887, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437088473954, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.363782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.474054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.474063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.474066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.474069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:08 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:08.474072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:18:09
Oct 14 06:18:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:18:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap
Oct 14 06:18:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['volumes', '.mgr', 'manila_data', 'images', 'manila_metadata', 'vms', 'backups']
Oct 14 06:18:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:18:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:18:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:18:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:18:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:18:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:18:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:18:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 14 06:18:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 479 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 101 KiB/s rd, 28 MiB/s wr, 148 op/s
Oct 14 06:18:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e166 e166: 6 total, 6 up, 6 in
Oct 14 06:18:09 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e166: 6 total, 6 up, 6 in
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006584844796901173 of space, bias 1.0, pg target 1.3169689593802347 quantized to 32 (current 32)
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014866541910943606 of space, bias 1.0, pg target 0.29584418402777773 quantized to 32 (current 32)
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0190030738181649 of space, bias 1.0, pg target 3.781611689814815 quantized to 32 (current 32)
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.0001597614810161921 quantized to 32 (current 32)
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.325382700539736e-05 quantized to 32 (current 32)
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:18:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 1.0359959519821328e-05 of space, bias 4.0, pg target 0.008094581704820398 quantized to 16 (current 16)
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:18:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:18:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:18:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 14 06:18:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e167 e167: 6 total, 6 up, 6 in
Oct 14 06:18:09 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e167: 6 total, 6 up, 6 in
Oct 14 06:18:10 localhost nova_compute[295778]: 2025-10-14 10:18:10.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:11 localhost ceph-mgr[300442]: [devicehealth INFO root] Check health
Oct 14 06:18:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 615 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 136 KiB/s rd, 33 MiB/s wr, 193 op/s
Oct 14 06:18:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 14 06:18:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e168 e168: 6 total, 6 up, 6 in
Oct 14 06:18:11 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e168: 6 total, 6 up, 6 in
Oct 14 06:18:11 localhost sshd[341039]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:18:11 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:11.871 270389 INFO neutron.agent.linux.ip_lib [None req-a809098f-2c47-4e5e-a55d-f68e9adc1ebb - - - - - -] Device tap17f36d7b-5e cannot be used as it has no MAC address#033[00m
Oct 14 06:18:11 localhost nova_compute[295778]: 2025-10-14 10:18:11.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:11 localhost kernel: device tap17f36d7b-5e entered promiscuous mode
Oct 14 06:18:11 localhost nova_compute[295778]: 2025-10-14 10:18:11.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:11 localhost systemd-udevd[341051]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:18:11 localhost NetworkManager[5972]: [1760437091.9106] manager: (tap17f36d7b-5e): new Generic device (/org/freedesktop/NetworkManager/Devices/67)
Oct 14 06:18:11 localhost ovn_controller[156286]: 2025-10-14T10:18:11Z|00370|binding|INFO|Claiming lport 17f36d7b-5eee-4372-b9be-f4a22b1be04c for this chassis.
Oct 14 06:18:11 localhost ovn_controller[156286]: 2025-10-14T10:18:11Z|00371|binding|INFO|17f36d7b-5eee-4372-b9be-f4a22b1be04c: Claiming unknown
Oct 14 06:18:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:11.935 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-af652dfa-dec0-4338-aa80-93244162eed7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af652dfa-dec0-4338-aa80-93244162eed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c4e628039e94868b41efbbdc1307f19', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6dc4753-28a8-4a6a-b9c0-cfd231255ce7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=17f36d7b-5eee-4372-b9be-f4a22b1be04c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:18:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:11.937 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 17f36d7b-5eee-4372-b9be-f4a22b1be04c in datapath af652dfa-dec0-4338-aa80-93244162eed7 bound to our chassis#033[00m
Oct 14 06:18:11 localhost journal[236030]: ethtool ioctl error on tap17f36d7b-5e: No such device
Oct 14 06:18:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:11.939 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network af652dfa-dec0-4338-aa80-93244162eed7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 06:18:11 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:11.941 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[19bbc7d7-8f14-40dc-b7d0-a9f2b82927ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:18:11 localhost nova_compute[295778]: 2025-10-14 10:18:11.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:11 localhost journal[236030]: ethtool ioctl error on tap17f36d7b-5e: No such device
Oct 14 06:18:11 localhost ovn_controller[156286]: 2025-10-14T10:18:11Z|00372|binding|INFO|Setting lport 17f36d7b-5eee-4372-b9be-f4a22b1be04c ovn-installed in OVS
Oct 14 06:18:11 localhost ovn_controller[156286]: 2025-10-14T10:18:11Z|00373|binding|INFO|Setting lport 17f36d7b-5eee-4372-b9be-f4a22b1be04c up in Southbound
Oct 14 06:18:11 localhost nova_compute[295778]: 2025-10-14 10:18:11.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:11 localhost journal[236030]: ethtool ioctl error on tap17f36d7b-5e: No such device
Oct 14 06:18:11 localhost nova_compute[295778]: 2025-10-14 10:18:11.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:11 localhost journal[236030]: ethtool ioctl error on tap17f36d7b-5e: No such device
Oct 14 06:18:11 localhost journal[236030]: ethtool ioctl error on tap17f36d7b-5e: No such device
Oct 14 06:18:11 localhost journal[236030]: ethtool ioctl error on tap17f36d7b-5e: No such device
Oct 14 06:18:11 localhost journal[236030]: ethtool ioctl error on tap17f36d7b-5e: No such device
Oct 14 06:18:11 localhost journal[236030]: ethtool ioctl error on tap17f36d7b-5e: No such device
Oct 14 06:18:11 localhost nova_compute[295778]: 2025-10-14 10:18:11.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:12 localhost nova_compute[295778]: 2025-10-14 10:18:12.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:12 localhost podman[341122]:
Oct 14 06:18:12 localhost podman[341122]: 2025-10-14 10:18:12.904780153 +0000 UTC m=+0.097964127 container create 1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af652dfa-dec0-4338-aa80-93244162eed7, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:18:12 localhost systemd[1]: Started libpod-conmon-1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756.scope.
Oct 14 06:18:12 localhost podman[341122]: 2025-10-14 10:18:12.854078584 +0000 UTC m=+0.047262568 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:18:12 localhost systemd[1]: Started libcrun container.
Oct 14 06:18:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad70b7a4dd3082fc30406f169cc30c591659d0e320823b673cdad3ba11906cee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:18:12 localhost podman[341122]: 2025-10-14 10:18:12.984075382 +0000 UTC m=+0.177259356 container init 1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af652dfa-dec0-4338-aa80-93244162eed7, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 06:18:12 localhost podman[341122]: 2025-10-14 10:18:12.998449635 +0000 UTC m=+0.191633609 container start 1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af652dfa-dec0-4338-aa80-93244162eed7, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:18:13 localhost dnsmasq[341140]: started, version 2.85 cachesize 150
Oct 14 06:18:13 localhost dnsmasq[341140]: DNS service limited to local subnets
Oct 14 06:18:13 localhost dnsmasq[341140]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:18:13 localhost dnsmasq[341140]: warning: no upstream servers configured
Oct 14 06:18:13 localhost dnsmasq-dhcp[341140]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 14 06:18:13 localhost dnsmasq[341140]: read /var/lib/neutron/dhcp/af652dfa-dec0-4338-aa80-93244162eed7/addn_hosts - 0 addresses
Oct 14 06:18:13 localhost dnsmasq-dhcp[341140]: read /var/lib/neutron/dhcp/af652dfa-dec0-4338-aa80-93244162eed7/host
Oct 14 06:18:13 localhost dnsmasq-dhcp[341140]: read /var/lib/neutron/dhcp/af652dfa-dec0-4338-aa80-93244162eed7/opts
Oct 14 06:18:13 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:13.183 270389 INFO neutron.agent.dhcp.agent [None req-08de638a-6a08-4686-8653-405a5bead526 - - - - - -] DHCP configuration for ports {'08e3ebdd-7c58-4c1e-95a7-c8b1ee388877'} is completed#033[00m
Oct 14 06:18:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 615 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 111 KiB/s rd, 27 MiB/s wr, 157 op/s
Oct 14 06:18:13 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 14 06:18:13 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e169 e169: 6 total, 6 up, 6 in
Oct 14 06:18:13 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e169: 6 total, 6 up, 6 in
Oct 14 06:18:13 localhost nova_compute[295778]: 2025-10-14 10:18:13.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:13 localhost ovn_controller[156286]: 2025-10-14T10:18:13Z|00374|binding|INFO|Removing iface tap17f36d7b-5e ovn-installed in OVS
Oct 14 06:18:13 localhost ovn_controller[156286]: 2025-10-14T10:18:13Z|00375|binding|INFO|Removing lport 17f36d7b-5eee-4372-b9be-f4a22b1be04c ovn-installed in OVS
Oct 14 06:18:13 localhost nova_compute[295778]: 2025-10-14 10:18:13.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:13.855 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port fc327523-f6cb-46b3-a8cf-28e6712f38a1 with type ""#033[00m
Oct 14 06:18:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:13.857 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-af652dfa-dec0-4338-aa80-93244162eed7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af652dfa-dec0-4338-aa80-93244162eed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c4e628039e94868b41efbbdc1307f19', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6dc4753-28a8-4a6a-b9c0-cfd231255ce7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=17f36d7b-5eee-4372-b9be-f4a22b1be04c) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:18:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:13.859 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 17f36d7b-5eee-4372-b9be-f4a22b1be04c in datapath af652dfa-dec0-4338-aa80-93244162eed7 unbound from our chassis#033[00m
Oct 14 06:18:13 localhost nova_compute[295778]: 2025-10-14 10:18:13.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:13.862 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network af652dfa-dec0-4338-aa80-93244162eed7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:18:13 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:13.863 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[39a6e940-e53b-4798-b46d-18ef1dd03be7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:18:13 localhost kernel: device tap17f36d7b-5e left promiscuous mode
Oct 14 06:18:13 localhost nova_compute[295778]: 2025-10-14 10:18:13.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:13 localhost nova_compute[295778]: 2025-10-14 10:18:13.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:14 localhost podman[341158]: 2025-10-14 10:18:14.376345802 +0000 UTC m=+0.061602240 container kill 1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af652dfa-dec0-4338-aa80-93244162eed7, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 06:18:14 localhost dnsmasq[341140]: read /var/lib/neutron/dhcp/af652dfa-dec0-4338-aa80-93244162eed7/addn_hosts - 0 addresses
Oct 14 06:18:14 localhost dnsmasq-dhcp[341140]: read /var/lib/neutron/dhcp/af652dfa-dec0-4338-aa80-93244162eed7/host
Oct 14 06:18:14 localhost dnsmasq-dhcp[341140]: read /var/lib/neutron/dhcp/af652dfa-dec0-4338-aa80-93244162eed7/opts
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent [None req-56218e96-2851-43d6-87e3-8e16e7eeb2c1 - - - - - -] Unable to reload_allocations dhcp for af652dfa-dec0-4338-aa80-93244162eed7.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap17f36d7b-5e not found in namespace qdhcp-af652dfa-dec0-4338-aa80-93244162eed7.
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap17f36d7b-5e not found in namespace qdhcp-af652dfa-dec0-4338-aa80-93244162eed7.
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.401 270389 ERROR neutron.agent.dhcp.agent #033[00m
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.408 270389 INFO neutron.agent.dhcp.agent [None req-ad2aad74-298f-4481-a112-afc639cfb2a8 - - - - - -] Synchronizing state#033[00m
Oct 14 06:18:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 14 06:18:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e170 e170: 6 total, 6 up, 6 in
Oct 14 06:18:14 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e170: 6 total, 6 up, 6 in
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.547 270389 INFO neutron.agent.dhcp.agent [None req-c755dffc-6631-439b-a366-f313d96bf0d1 - - - - - -] All active networks have been fetched through RPC.#033[00m
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.548 270389 INFO neutron.agent.dhcp.agent [-] Starting network af652dfa-dec0-4338-aa80-93244162eed7 dhcp configuration#033[00m
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.549 270389 INFO neutron.agent.dhcp.agent [-] Finished network af652dfa-dec0-4338-aa80-93244162eed7 dhcp configuration#033[00m
Oct 14 06:18:14 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:14.549 270389 INFO neutron.agent.dhcp.agent [None req-c755dffc-6631-439b-a366-f313d96bf0d1 - - - - - -] Synchronizing state complete#033[00m
Oct 14 06:18:14 localhost nova_compute[295778]: 2025-10-14 10:18:14.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:14 localhost dnsmasq[341140]: exiting on receipt of SIGTERM
Oct 14 06:18:14 localhost podman[341189]: 2025-10-14 10:18:14.814412097 +0000 UTC m=+0.076283050 container kill 1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af652dfa-dec0-4338-aa80-93244162eed7, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 06:18:14 localhost systemd[1]: libpod-1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756.scope: Deactivated successfully.
Oct 14 06:18:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:18:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 14 06:18:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e171 e171: 6 total, 6 up, 6 in
Oct 14 06:18:14 localhost podman[341204]: 2025-10-14 10:18:14.891978571 +0000 UTC m=+0.053356782 container died 1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af652dfa-dec0-4338-aa80-93244162eed7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator
team) Oct 14 06:18:14 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e171: 6 total, 6 up, 6 in Oct 14 06:18:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756-userdata-shm.mount: Deactivated successfully. Oct 14 06:18:14 localhost systemd[1]: var-lib-containers-storage-overlay-ad70b7a4dd3082fc30406f169cc30c591659d0e320823b673cdad3ba11906cee-merged.mount: Deactivated successfully. Oct 14 06:18:14 localhost podman[341204]: 2025-10-14 10:18:14.960251317 +0000 UTC m=+0.121629558 container remove 1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af652dfa-dec0-4338-aa80-93244162eed7, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:18:14 localhost systemd[1]: run-netns-qdhcp\x2daf652dfa\x2ddec0\x2d4338\x2daa80\x2d93244162eed7.mount: Deactivated successfully. Oct 14 06:18:14 localhost systemd[1]: libpod-conmon-1e3a44fd4f9162224ca1d2a480d7432e87762637f3af2181c9d80cee9df3a756.scope: Deactivated successfully. Oct 14 06:18:15 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
Oct 14 06:18:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 759 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 146 KiB/s rd, 36 MiB/s wr, 207 op/s Oct 14 06:18:15 localhost nova_compute[295778]: 2025-10-14 10:18:15.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e171 do_prune osdmap full prune enabled Oct 14 06:18:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e172 e172: 6 total, 6 up, 6 in Oct 14 06:18:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e172: 6 total, 6 up, 6 in Oct 14 06:18:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:18:16 localhost podman[341227]: 2025-10-14 10:18:16.543431085 +0000 UTC m=+0.083514803 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:18:16 localhost podman[341227]: 2025-10-14 10:18:16.581067786 +0000 UTC m=+0.121151514 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 06:18:16 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:18:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 759 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 146 KiB/s rd, 36 MiB/s wr, 207 op/s Oct 14 06:18:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:17.724 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:18:17 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:17.725 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 
06:18:17 localhost nova_compute[295778]: 2025-10-14 10:18:17.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:18 localhost nova_compute[295778]: 2025-10-14 10:18:18.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e172 do_prune osdmap full prune enabled Oct 14 06:18:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e173 e173: 6 total, 6 up, 6 in Oct 14 06:18:18 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e173: 6 total, 6 up, 6 in Oct 14 06:18:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 759 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 121 KiB/s rd, 30 MiB/s wr, 171 op/s Oct 14 06:18:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:18:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e173 do_prune osdmap full prune enabled Oct 14 06:18:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e174 e174: 6 total, 6 up, 6 in Oct 14 06:18:19 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e174: 6 total, 6 up, 6 in Oct 14 06:18:20 localhost nova_compute[295778]: 2025-10-14 10:18:20.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 895 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 52 KiB/s rd, 23 MiB/s wr, 75 op/s Oct 14 06:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:18:22 localhost systemd[1]: tmp-crun.kpecGH.mount: Deactivated successfully. Oct 14 06:18:22 localhost podman[341249]: 2025-10-14 10:18:22.565141615 +0000 UTC m=+0.102282113 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:18:22 localhost podman[341249]: 2025-10-14 10:18:22.58300783 +0000 UTC m=+0.120148338 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:18:22 localhost podman[341248]: 2025-10-14 10:18:22.541796213 +0000 UTC m=+0.083768339 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:18:22 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:18:22 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:22.653 270389 INFO neutron.agent.linux.ip_lib [None req-72db78f1-5116-45d2-9eec-a83bc487387f - - - - - -] Device tap7a323803-5d cannot be used as it has no MAC address#033[00m Oct 14 06:18:22 localhost podman[341248]: 2025-10-14 10:18:22.657216744 +0000 UTC m=+0.199188890 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:18:22 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:18:22 localhost nova_compute[295778]: 2025-10-14 10:18:22.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:22 localhost kernel: device tap7a323803-5d entered promiscuous mode Oct 14 06:18:22 localhost NetworkManager[5972]: [1760437102.6849] manager: (tap7a323803-5d): new Generic device (/org/freedesktop/NetworkManager/Devices/68) Oct 14 06:18:22 localhost ovn_controller[156286]: 2025-10-14T10:18:22Z|00376|binding|INFO|Claiming lport 7a323803-5d22-4f3f-b15e-db9c1f89da9f for this chassis. Oct 14 06:18:22 localhost nova_compute[295778]: 2025-10-14 10:18:22.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:22 localhost ovn_controller[156286]: 2025-10-14T10:18:22Z|00377|binding|INFO|7a323803-5d22-4f3f-b15e-db9c1f89da9f: Claiming unknown Oct 14 06:18:22 localhost systemd-udevd[341297]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:18:22 localhost journal[236030]: ethtool ioctl error on tap7a323803-5d: No such device Oct 14 06:18:22 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:22.715 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-c8701100-06fe-483b-ae49-708880d3790b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8701100-06fe-483b-ae49-708880d3790b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c4e628039e94868b41efbbdc1307f19', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83cf982b-8e39-4620-a840-06c0e1535f47, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7a323803-5d22-4f3f-b15e-db9c1f89da9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:18:22 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:22.717 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 7a323803-5d22-4f3f-b15e-db9c1f89da9f in datapath c8701100-06fe-483b-ae49-708880d3790b bound to our chassis#033[00m Oct 14 06:18:22 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:22.719 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c8701100-06fe-483b-ae49-708880d3790b or it has no MAC or 
IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:18:22 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:22.720 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b68b2094-c07d-4296-a015-7a3dd6214987]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:18:22 localhost journal[236030]: ethtool ioctl error on tap7a323803-5d: No such device Oct 14 06:18:22 localhost journal[236030]: ethtool ioctl error on tap7a323803-5d: No such device Oct 14 06:18:22 localhost ovn_controller[156286]: 2025-10-14T10:18:22Z|00378|binding|INFO|Setting lport 7a323803-5d22-4f3f-b15e-db9c1f89da9f ovn-installed in OVS Oct 14 06:18:22 localhost ovn_controller[156286]: 2025-10-14T10:18:22Z|00379|binding|INFO|Setting lport 7a323803-5d22-4f3f-b15e-db9c1f89da9f up in Southbound Oct 14 06:18:22 localhost nova_compute[295778]: 2025-10-14 10:18:22.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:22 localhost journal[236030]: ethtool ioctl error on tap7a323803-5d: No such device Oct 14 06:18:22 localhost journal[236030]: ethtool ioctl error on tap7a323803-5d: No such device Oct 14 06:18:22 localhost journal[236030]: ethtool ioctl error on tap7a323803-5d: No such device Oct 14 06:18:22 localhost journal[236030]: ethtool ioctl error on tap7a323803-5d: No such device Oct 14 06:18:22 localhost journal[236030]: ethtool ioctl error on tap7a323803-5d: No such device Oct 14 06:18:22 localhost nova_compute[295778]: 2025-10-14 10:18:22.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:22 localhost nova_compute[295778]: 2025-10-14 10:18:22.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 895 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 42 KiB/s rd, 18 MiB/s wr, 61 op/s Oct 14 06:18:23 localhost nova_compute[295778]: 2025-10-14 10:18:23.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:23 localhost podman[341366]: Oct 14 06:18:23 localhost podman[341366]: 2025-10-14 10:18:23.601021622 +0000 UTC m=+0.084039816 container create 7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8701100-06fe-483b-ae49-708880d3790b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 14 06:18:23 localhost systemd[1]: Started libpod-conmon-7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76.scope. Oct 14 06:18:23 localhost systemd[1]: Started libcrun container. 
Oct 14 06:18:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0cfc2bc10d8bb745cb9ed62ef76be9a1dd9d3245ec142d17be05a9c48d5310/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:18:23 localhost podman[341366]: 2025-10-14 10:18:23.559620602 +0000 UTC m=+0.042638846 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:18:23 localhost podman[341366]: 2025-10-14 10:18:23.666633878 +0000 UTC m=+0.149652072 container init 7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8701100-06fe-483b-ae49-708880d3790b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 14 06:18:23 localhost podman[341366]: 2025-10-14 10:18:23.675469453 +0000 UTC m=+0.158487647 container start 7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8701100-06fe-483b-ae49-708880d3790b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:18:23 localhost dnsmasq[341384]: started, version 2.85 cachesize 150 Oct 14 06:18:23 localhost dnsmasq[341384]: DNS service limited to local subnets Oct 14 06:18:23 localhost dnsmasq[341384]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:18:23 localhost dnsmasq[341384]: warning: no upstream servers configured Oct 14 06:18:23 localhost dnsmasq-dhcp[341384]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:18:23 localhost dnsmasq[341384]: read /var/lib/neutron/dhcp/c8701100-06fe-483b-ae49-708880d3790b/addn_hosts - 0 addresses Oct 14 06:18:23 localhost dnsmasq-dhcp[341384]: read /var/lib/neutron/dhcp/c8701100-06fe-483b-ae49-708880d3790b/host Oct 14 06:18:23 localhost dnsmasq-dhcp[341384]: read /var/lib/neutron/dhcp/c8701100-06fe-483b-ae49-708880d3790b/opts Oct 14 06:18:23 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:23.786 270389 INFO neutron.agent.dhcp.agent [None req-3feff33c-3c7e-430d-9ff0-c8823bf61cad - - - - - -] DHCP configuration for ports {'b092c620-91b6-4e56-8bfe-7f24fe059e2b'} is completed#033[00m Oct 14 06:18:23 localhost nova_compute[295778]: 2025-10-14 10:18:23.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:18:23 localhost nova_compute[295778]: 2025-10-14 10:18:23.929 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:18:23 localhost nova_compute[295778]: 2025-10-14 10:18:23.930 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:18:23 
localhost nova_compute[295778]: 2025-10-14 10:18:23.930 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:18:23 localhost nova_compute[295778]: 2025-10-14 10:18:23.931 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:18:23 localhost nova_compute[295778]: 2025-10-14 10:18:23.931 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:18:24 localhost ovn_controller[156286]: 2025-10-14T10:18:24Z|00380|binding|INFO|Removing iface tap7a323803-5d ovn-installed in OVS Oct 14 06:18:24 localhost ovn_controller[156286]: 2025-10-14T10:18:24Z|00381|binding|INFO|Removing lport 7a323803-5d22-4f3f-b15e-db9c1f89da9f ovn-installed in OVS Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:24.038 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 67a4c554-97ab-4631-a2fa-e84fa60a4a3d with type ""#033[00m Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:24.041 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-c8701100-06fe-483b-ae49-708880d3790b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8701100-06fe-483b-ae49-708880d3790b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c4e628039e94868b41efbbdc1307f19', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83cf982b-8e39-4620-a840-06c0e1535f47, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7a323803-5d22-4f3f-b15e-db9c1f89da9f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:18:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:24.043 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 7a323803-5d22-4f3f-b15e-db9c1f89da9f in datapath c8701100-06fe-483b-ae49-708880d3790b unbound from our chassis#033[00m Oct 14 06:18:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:24.044 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c8701100-06fe-483b-ae49-708880d3790b or it has no MAC or IP addresses 
configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:18:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:24.045 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6ec913-ff29-44c8-808e-dd193086e26f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:18:24 localhost dnsmasq[341384]: exiting on receipt of SIGTERM Oct 14 06:18:24 localhost podman[341401]: 2025-10-14 10:18:24.050283095 +0000 UTC m=+0.067107646 container kill 7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8701100-06fe-483b-ae49-708880d3790b, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:18:24 localhost systemd[1]: libpod-7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76.scope: Deactivated successfully. 
Oct 14 06:18:24 localhost podman[341416]: 2025-10-14 10:18:24.116931098 +0000 UTC m=+0.053568847 container died 7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8701100-06fe-483b-ae49-708880d3790b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:18:24 localhost podman[341416]: 2025-10-14 10:18:24.197360517 +0000 UTC m=+0.133998266 container cleanup 7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8701100-06fe-483b-ae49-708880d3790b, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 14 06:18:24 localhost systemd[1]: libpod-conmon-7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76.scope: Deactivated successfully. 
Oct 14 06:18:24 localhost podman[341424]: 2025-10-14 10:18:24.225612639 +0000 UTC m=+0.147099194 container remove 7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c8701100-06fe-483b-ae49-708880d3790b, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:24 localhost kernel: device tap7a323803-5d left promiscuous mode Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:24.352 270389 INFO neutron.agent.dhcp.agent [None req-2675f9e5-6050-4944-8b78-f5363d40d7a6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:18:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:24.353 270389 INFO neutron.agent.dhcp.agent [None req-2675f9e5-6050-4944-8b78-f5363d40d7a6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:18:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:18:24 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3408193388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.397 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:18:24 localhost systemd[1]: var-lib-containers-storage-overlay-8d0cfc2bc10d8bb745cb9ed62ef76be9a1dd9d3245ec142d17be05a9c48d5310-merged.mount: Deactivated successfully. Oct 14 06:18:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ab908f41dd104e57a89b31f008db6e3fd061baa8cc713d1fe4a00f35cfa9b76-userdata-shm.mount: Deactivated successfully. Oct 14 06:18:24 localhost systemd[1]: run-netns-qdhcp\x2dc8701100\x2d06fe\x2d483b\x2dae49\x2d708880d3790b.mount: Deactivated successfully. Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.662 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.663 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11381MB free_disk=41.70014190673828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.663 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.664 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.736 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.736 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:18:24 localhost nova_compute[295778]: 2025-10-14 10:18:24.768 2 DEBUG oslo_concurrency.processutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:18:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 14 06:18:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e174 do_prune osdmap full prune enabled Oct 14 06:18:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e175 e175: 6 total, 6 up, 6 in Oct 14 06:18:24 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e175: 6 total, 6 up, 6 in Oct 14 06:18:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:18:25 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1389809063' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:18:25 localhost nova_compute[295778]: 2025-10-14 10:18:25.282 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:18:25 localhost nova_compute[295778]: 2025-10-14 10:18:25.289 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:18:25 localhost nova_compute[295778]: 2025-10-14 10:18:25.311 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider 
ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:18:25 localhost nova_compute[295778]: 2025-10-14 10:18:25.313 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:18:25 localhost nova_compute[295778]: 2025-10-14 10:18:25.314 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:18:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 944 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 102 KiB/s rd, 39 MiB/s wr, 153 op/s Oct 14 06:18:25 localhost neutron_sriov_agent[263389]: 2025-10-14 10:18:25.611 2 INFO neutron.agent.securitygroups_rpc [req-546c2793-12af-430c-80b1-7cb6124afeaa req-bf258e6c-e801-49be-9d6e-83494c9ce496 4c194ea59b244432a9ec5417b8101ebe 5ac8b4aa702a449b8bf4a8039f977fc5 - - default default] Security group member updated ['8fe43e8a-a14a-430f-ba7d-c6a0fef96a1b']#033[00m Oct 14 06:18:25 localhost nova_compute[295778]: 2025-10-14 10:18:25.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:26.726 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:18:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 944 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 87 KiB/s rd, 33 MiB/s wr, 129 op/s Oct 14 06:18:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:18:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:18:27 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:27.623 270389 INFO neutron.agent.linux.ip_lib [None req-db140ee2-abb1-40b9-9525-6a20579278cd - - - - - -] Device tapd5985230-87 cannot be used as it has no MAC address#033[00m Oct 14 06:18:27 localhost podman[341486]: 2025-10-14 10:18:27.62891325 +0000 UTC m=+0.168424332 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 06:18:27 localhost podman[341487]: 2025-10-14 10:18:27.586695467 +0000 UTC m=+0.121587726 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:18:27 localhost podman[341486]: 2025-10-14 10:18:27.71611708 +0000 UTC m=+0.255628132 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true) Oct 14 06:18:27 localhost nova_compute[295778]: 2025-10-14 10:18:27.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:27 localhost kernel: device tapd5985230-87 entered promiscuous mode Oct 14 06:18:27 localhost NetworkManager[5972]: [1760437107.7297] manager: (tapd5985230-87): new Generic device (/org/freedesktop/NetworkManager/Devices/69) Oct 14 06:18:27 localhost ovn_controller[156286]: 2025-10-14T10:18:27Z|00382|binding|INFO|Claiming lport d5985230-8720-4d5e-8cb0-b3919a919ed0 for this chassis. Oct 14 06:18:27 localhost ovn_controller[156286]: 2025-10-14T10:18:27Z|00383|binding|INFO|d5985230-8720-4d5e-8cb0-b3919a919ed0: Claiming unknown Oct 14 06:18:27 localhost nova_compute[295778]: 2025-10-14 10:18:27.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:27 localhost systemd-udevd[341532]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:18:27 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:27.742 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-3631450e-2b6d-413b-aa35-1559ff0a66da', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3631450e-2b6d-413b-aa35-1559ff0a66da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b8394de28c74b2e99420d1b07ba3637', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66862107-f277-4cfe-a4b9-1227f09eeff9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5985230-8720-4d5e-8cb0-b3919a919ed0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:18:27 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:27.743 161932 INFO neutron.agent.ovn.metadata.agent [-] Port d5985230-8720-4d5e-8cb0-b3919a919ed0 in datapath 3631450e-2b6d-413b-aa35-1559ff0a66da bound to our chassis#033[00m Oct 14 06:18:27 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:27.744 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3631450e-2b6d-413b-aa35-1559ff0a66da or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:18:27 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:27.745 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[94afbae8-4356-4b1b-b5af-893921d0f7c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:18:27 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:18:27 localhost journal[236030]: ethtool ioctl error on tapd5985230-87: No such device Oct 14 06:18:27 localhost journal[236030]: ethtool ioctl error on tapd5985230-87: No such device Oct 14 06:18:27 localhost podman[341487]: 2025-10-14 10:18:27.771592126 +0000 UTC m=+0.306484395 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:18:27 localhost ovn_controller[156286]: 2025-10-14T10:18:27Z|00384|binding|INFO|Setting lport d5985230-8720-4d5e-8cb0-b3919a919ed0 ovn-installed in OVS Oct 14 06:18:27 localhost ovn_controller[156286]: 2025-10-14T10:18:27Z|00385|binding|INFO|Setting lport d5985230-8720-4d5e-8cb0-b3919a919ed0 up in Southbound Oct 14 06:18:27 localhost nova_compute[295778]: 2025-10-14 10:18:27.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:27 localhost journal[236030]: ethtool ioctl error on tapd5985230-87: No such device Oct 14 06:18:27 localhost journal[236030]: ethtool ioctl error on tapd5985230-87: No such device Oct 14 06:18:27 localhost journal[236030]: ethtool ioctl error on tapd5985230-87: No such device Oct 14 06:18:27 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:18:27 localhost journal[236030]: ethtool ioctl error on tapd5985230-87: No such device Oct 14 06:18:27 localhost journal[236030]: ethtool ioctl error on tapd5985230-87: No such device Oct 14 06:18:27 localhost journal[236030]: ethtool ioctl error on tapd5985230-87: No such device Oct 14 06:18:27 localhost nova_compute[295778]: 2025-10-14 10:18:27.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:27 localhost nova_compute[295778]: 2025-10-14 10:18:27.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:28 localhost nova_compute[295778]: 2025-10-14 10:18:28.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:28 localhost podman[341639]: Oct 14 06:18:28 localhost podman[341639]: 2025-10-14 10:18:28.787345869 +0000 UTC m=+0.115174315 container create 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:18:28 localhost podman[341639]: 2025-10-14 10:18:28.729663794 +0000 UTC m=+0.057492270 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:18:28 localhost systemd[1]: Started libpod-conmon-6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71.scope. Oct 14 06:18:28 localhost systemd[1]: Started libcrun container. 
Oct 14 06:18:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5556b84245012c9b015e5a964cbc2f77d03df551e152817ff69ee083c305a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 06:18:28 localhost podman[341639]: 2025-10-14 10:18:28.896114643 +0000 UTC m=+0.223943089 container init 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 06:18:28 localhost podman[341639]: 2025-10-14 10:18:28.909620752 +0000 UTC m=+0.237449218 container start 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 06:18:28 localhost dnsmasq[341677]: started, version 2.85 cachesize 150
Oct 14 06:18:28 localhost dnsmasq[341677]: DNS service limited to local subnets
Oct 14 06:18:28 localhost dnsmasq[341677]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 14 06:18:28 localhost dnsmasq[341677]: warning: no upstream servers configured
Oct 14 06:18:28 localhost dnsmasq-dhcp[341677]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Oct 14 06:18:28 localhost dnsmasq[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/addn_hosts - 0 addresses
Oct 14 06:18:28 localhost dnsmasq-dhcp[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/host
Oct 14 06:18:28 localhost dnsmasq-dhcp[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/opts
Oct 14 06:18:28 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:28.971 270389 INFO neutron.agent.dhcp.agent [None req-db140ee2-abb1-40b9-9525-6a20579278cd - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:18:27Z, description=, device_id=753a124a-202a-47c3-a41b-b95c939939b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=608b8835-d78f-4d85-bed3-550589e30166, ip_allocation=immediate, mac_address=fa:16:3e:4f:19:26, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:18:25Z, description=, dns_domain=, id=3631450e-2b6d-413b-aa35-1559ff0a66da, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-49948722, port_security_enabled=True, project_id=6b8394de28c74b2e99420d1b07ba3637, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50490, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2635, status=ACTIVE, subnets=['0d24738f-d4bd-4ede-8769-815f5462db30'], tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:18:26Z, vlan_transparent=None, network_id=3631450e-2b6d-413b-aa35-1559ff0a66da, port_security_enabled=False, project_id=6b8394de28c74b2e99420d1b07ba3637, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2648, status=DOWN, tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:18:27Z on network 3631450e-2b6d-413b-aa35-1559ff0a66da#033[00m
Oct 14 06:18:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:29.118 270389 INFO neutron.agent.dhcp.agent [None req-e076c1b1-6e19-4521-abf8-1cff13447eaa - - - - - -] DHCP configuration for ports {'eb226650-db84-4c81-ba69-ce46bf3057a2'} is completed#033[00m
Oct 14 06:18:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 14 06:18:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e176 e176: 6 total, 6 up, 6 in
Oct 14 06:18:29 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e176: 6 total, 6 up, 6 in
Oct 14 06:18:29 localhost dnsmasq[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/addn_hosts - 1 addresses
Oct 14 06:18:29 localhost dnsmasq-dhcp[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/host
Oct 14 06:18:29 localhost dnsmasq-dhcp[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/opts
Oct 14 06:18:29 localhost podman[341723]: 2025-10-14 10:18:29.269664351 +0000 UTC m=+0.070729232 container kill 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:18:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 944 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 48 KiB/s rd, 16 MiB/s wr, 73 op/s
Oct 14 06:18:29 localhost nova_compute[295778]: 2025-10-14 10:18:29.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:29.456 270389 INFO neutron.agent.dhcp.agent [None req-db140ee2-abb1-40b9-9525-6a20579278cd - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:18:27Z, description=, device_id=753a124a-202a-47c3-a41b-b95c939939b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=608b8835-d78f-4d85-bed3-550589e30166, ip_allocation=immediate, mac_address=fa:16:3e:4f:19:26, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:18:25Z, description=, dns_domain=, id=3631450e-2b6d-413b-aa35-1559ff0a66da, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-49948722, port_security_enabled=True, project_id=6b8394de28c74b2e99420d1b07ba3637, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50490, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2635, status=ACTIVE, subnets=['0d24738f-d4bd-4ede-8769-815f5462db30'], tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:18:26Z, vlan_transparent=None, network_id=3631450e-2b6d-413b-aa35-1559ff0a66da, port_security_enabled=False, project_id=6b8394de28c74b2e99420d1b07ba3637, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2648, status=DOWN, tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:18:27Z on network 3631450e-2b6d-413b-aa35-1559ff0a66da#033[00m
Oct 14 06:18:29 localhost podman[341768]: 2025-10-14 10:18:29.478914348 +0000 UTC m=+0.103973268 container exec 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, description=Red Hat Ceph Storage 7, release=553, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, RELEASE=main, vcs-type=git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 14 06:18:29 localhost podman[341768]: 2025-10-14 10:18:29.615589423 +0000 UTC m=+0.240648353 container exec_died 9fcc1d89c3ab9656b45a0f275bbb212ca720d92aa93a13e8c695d7f3fbc424cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-crash-np0005486731, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, release=553, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64)
Oct 14 06:18:29 localhost dnsmasq[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/addn_hosts - 1 addresses
Oct 14 06:18:29 localhost dnsmasq-dhcp[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/host
Oct 14 06:18:29 localhost podman[341815]: 2025-10-14 10:18:29.710102998 +0000 UTC m=+0.081202461 container kill 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 06:18:29 localhost dnsmasq-dhcp[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/opts
Oct 14 06:18:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:29.796 270389 INFO neutron.agent.dhcp.agent [None req-2a3b77bb-6e36-48c0-ace2-5c40b1e6160a - - - - - -] DHCP configuration for ports {'608b8835-d78f-4d85-bed3-550589e30166'} is completed#033[00m
Oct 14 06:18:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:29.902759) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437109902843, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 621, "num_deletes": 263, "total_data_size": 421794, "memory_usage": 433224, "flush_reason": "Manual Compaction"}
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437109910150, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 413188, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31465, "largest_seqno": 32084, "table_properties": {"data_size": 409968, "index_size": 1139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7984, "raw_average_key_size": 19, "raw_value_size": 403209, "raw_average_value_size": 998, "num_data_blocks": 50, "num_entries": 404, "num_filter_entries": 404, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760437088, "oldest_key_time": 1760437088, "file_creation_time": 1760437109, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 7445 microseconds, and 2111 cpu microseconds.
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:29.910213) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 413188 bytes OK
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:29.910233) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:29.912033) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:29.912055) EVENT_LOG_v1 {"time_micros": 1760437109912048, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:29.912077) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 418336, prev total WAL file size 418660, number of live WAL files 2.
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:29.912784) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323634' seq:72057594037927935, type:22 .. '6C6F676D0034353137' seq:0, type:0; will stop at (end)
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(403KB)], [57(16MB)]
Oct 14 06:18:29 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437109912861, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 17506804, "oldest_snapshot_seqno": -1}
Oct 14 06:18:29 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:29.991 270389 INFO neutron.agent.dhcp.agent [None req-2dd27e72-fd24-47d6-a0c8-3e6bb9035fad - - - - - -] DHCP configuration for ports {'608b8835-d78f-4d85-bed3-550589e30166'} is completed#033[00m
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 12898 keys, 16869073 bytes, temperature: kUnknown
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437110041147, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 16869073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16797111, "index_size": 38645, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32261, "raw_key_size": 349333, "raw_average_key_size": 27, "raw_value_size": 16578722, "raw_average_value_size": 1285, "num_data_blocks": 1426, "num_entries": 12898, "num_filter_entries": 12898, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760437109, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:30.041752) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 16869073 bytes
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:30.046842) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.3 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 16.3 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(83.2) write-amplify(40.8) OK, records in: 13437, records dropped: 539 output_compression: NoCompression
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:30.046878) EVENT_LOG_v1 {"time_micros": 1760437110046859, "job": 34, "event": "compaction_finished", "compaction_time_micros": 128404, "compaction_time_cpu_micros": 48375, "output_level": 6, "num_output_files": 1, "total_output_size": 16869073, "num_input_records": 13437, "num_output_records": 12898, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437110047092, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437110050030, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:29.912597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:30.050132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:30.050140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:30.050142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:30.050144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:30 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:18:30.050146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:18:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0)
Oct 14 06:18:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0)
Oct 14 06:18:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 14 06:18:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e177 e177: 6 total, 6 up, 6 in
Oct 14 06:18:30 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e177: 6 total, 6 up, 6 in
Oct 14 06:18:30 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:30 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:30 localhost nova_compute[295778]: 2025-10-14 10:18:30.315 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:18:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0)
Oct 14 06:18:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0)
Oct 14 06:18:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0)
Oct 14 06:18:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0)
Oct 14 06:18:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:30 localhost podman[246584]: time="2025-10-14T10:18:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 06:18:30 localhost podman[246584]: @ - - [14/Oct/2025:10:18:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148135 "" "Go-http-client/1.1"
Oct 14 06:18:30 localhost podman[246584]: @ - - [14/Oct/2025:10:18:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19849 "" "Go-http-client/1.1"
Oct 14 06:18:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4532a019-ded1-417f-964d-fef426a6e328", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:18:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4532a019-ded1-417f-964d-fef426a6e328, vol_name:cephfs) < ""
Oct 14 06:18:30 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4532a019-ded1-417f-964d-fef426a6e328/.meta.tmp'
Oct 14 06:18:30 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4532a019-ded1-417f-964d-fef426a6e328/.meta.tmp' to config b'/volumes/_nogroup/4532a019-ded1-417f-964d-fef426a6e328/.meta'
Oct 14 06:18:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4532a019-ded1-417f-964d-fef426a6e328, vol_name:cephfs) < ""
Oct 14 06:18:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4532a019-ded1-417f-964d-fef426a6e328", "format": "json"}]: dispatch
Oct 14 06:18:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4532a019-ded1-417f-964d-fef426a6e328, vol_name:cephfs) < ""
Oct 14 06:18:30 localhost nova_compute[295778]: 2025-10-14 10:18:30.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4532a019-ded1-417f-964d-fef426a6e328, vol_name:cephfs) < ""
Oct 14 06:18:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:18:30 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 14 06:18:31 localhost ceph-mgr[300442]: [cephadm INFO root] Adjusting osd_memory_target on np0005486732.localdomain to 836.6M
Oct 14 06:18:31 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005486732.localdomain to 836.6M
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 14 06:18:31 localhost ceph-mgr[300442]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:18:31 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:18:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 14 06:18:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 14 06:18:31 localhost ceph-mgr[300442]: [cephadm INFO root] Adjusting osd_memory_target on np0005486733.localdomain to 836.6M
Oct 14 06:18:31 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005486733.localdomain to 836.6M
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 14 06:18:31 localhost ceph-mgr[300442]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:18:31 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:18:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v407: 177 pgs: 177 active+clean; 1.0 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 90 KiB/s rd, 40 MiB/s wr, 139 op/s
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 14 06:18:31 localhost ceph-mgr[300442]: [cephadm INFO root] Adjusting osd_memory_target on np0005486731.localdomain to 836.6M
Oct 14 06:18:31 localhost ceph-mgr[300442]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005486731.localdomain to 836.6M
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 14 06:18:31 localhost ceph-mgr[300442]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:18:31 localhost ceph-mgr[300442]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:18:31 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev dc4e5a04-8e53-4672-b079-9a7d8ed19482 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:18:31 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev dc4e5a04-8e53-4672-b079-9a7d8ed19482 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:18:31 localhost ceph-mgr[300442]: [progress INFO root] Completed event dc4e5a04-8e53-4672-b079-9a7d8ed19482 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 14 06:18:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e178 e178: 6 total, 6 up, 6 in
Oct 14 06:18:31 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e178: 6 total, 6 up, 6 in
Oct 14 06:18:31 localhost nova_compute[295778]: 2025-10-14 10:18:31.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:18:31 localhost nova_compute[295778]: 2025-10-14 10:18:31.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 06:18:31 localhost nova_compute[295778]: 2025-10-14 10:18:31.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 06:18:31 localhost nova_compute[295778]: 2025-10-14 10:18:31.919 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update.
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:18:31 localhost nova_compute[295778]: 2025-10-14 10:18:31.919 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:18:31 localhost nova_compute[295778]: 2025-10-14 10:18:31.920 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:18:31 localhost nova_compute[295778]: 2025-10-14 10:18:31.920 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:18:32 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486732.localdomain to 836.6M Oct 14 06:18:32 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486732.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:18:32 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 14 06:18:32 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 14 06:18:32 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486733.localdomain to 836.6M Oct 14 06:18:32 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486733.localdomain to 877246668: error parsing value: 
Value '877246668' is below minimum 939524096 Oct 14 06:18:32 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 14 06:18:32 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 14 06:18:32 localhost ceph-mon[307093]: Adjusting osd_memory_target on np0005486731.localdomain to 836.6M Oct 14 06:18:32 localhost ceph-mon[307093]: Unable to set osd_memory_target on np0005486731.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 14 06:18:32 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:18:32 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:18:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e178 do_prune osdmap full prune enabled Oct 14 06:18:33 localhost openstack_network_exporter[248748]: ERROR 10:18:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:18:33 localhost openstack_network_exporter[248748]: ERROR 10:18:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:18:33 localhost openstack_network_exporter[248748]: ERROR 10:18:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:18:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e179 e179: 6 total, 6 up, 6 in Oct 14 06:18:33 localhost openstack_network_exporter[248748]: ERROR 10:18:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:18:33 localhost 
openstack_network_exporter[248748]: Oct 14 06:18:33 localhost openstack_network_exporter[248748]: ERROR 10:18:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:18:33 localhost openstack_network_exporter[248748]: Oct 14 06:18:33 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e179: 6 total, 6 up, 6 in Oct 14 06:18:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 1.0 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 47 KiB/s rd, 31 MiB/s wr, 75 op/s Oct 14 06:18:33 localhost nova_compute[295778]: 2025-10-14 10:18:33.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:33 localhost nova_compute[295778]: 2025-10-14 10:18:33.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:33 localhost nova_compute[295778]: 2025-10-14 10:18:33.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:18:33 localhost nova_compute[295778]: 2025-10-14 10:18:33.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:18:34 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:18:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:18:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 
172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:18:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6d6d1fa3-a36c-4fbc-8964-52769dbebb23", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:18:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23/.meta.tmp' Oct 14 06:18:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23/.meta.tmp' to config b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23/.meta' Oct 14 06:18:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6d6d1fa3-a36c-4fbc-8964-52769dbebb23", "format": "json"}]: dispatch Oct 14 06:18:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, 
prefix:fs subvolume getpath, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:18:34 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:18:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:18:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e179 do_prune osdmap full prune enabled Oct 14 06:18:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e180 e180: 6 total, 6 up, 6 in Oct 14 06:18:34 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e180: 6 total, 6 up, 6 in Oct 14 06:18:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 93 KiB/s rd, 25 MiB/s wr, 133 op/s Oct 14 06:18:35 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:18:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:18:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:18:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:18:35 localhost podman[342016]: 2025-10-14 10:18:35.588448607 +0000 UTC m=+0.119310645 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:18:35 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:35.603 270389 INFO neutron.agent.linux.ip_lib [None req-878ddf49-da0f-4d27-9e6f-27a2a8fd0199 - - - - - -] Device tap0625c5bf-02 cannot be used as it has no MAC address#033[00m
Oct 14 06:18:35 localhost nova_compute[295778]: 2025-10-14 10:18:35.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:35 localhost kernel: device tap0625c5bf-02 entered promiscuous mode
Oct 14 06:18:35 localhost ovn_controller[156286]: 2025-10-14T10:18:35Z|00386|binding|INFO|Claiming lport 0625c5bf-02fe-4b01-851d-0a25a36ae8e0 for this chassis.
Oct 14 06:18:35 localhost nova_compute[295778]: 2025-10-14 10:18:35.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:35 localhost NetworkManager[5972]: [1760437115.6440] manager: (tap0625c5bf-02): new Generic device (/org/freedesktop/NetworkManager/Devices/70)
Oct 14 06:18:35 localhost ovn_controller[156286]: 2025-10-14T10:18:35Z|00387|binding|INFO|0625c5bf-02fe-4b01-851d-0a25a36ae8e0: Claiming unknown
Oct 14 06:18:35 localhost systemd-udevd[342072]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 06:18:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:35.655 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-c3e195af-5952-46ea-9565-ee7badb5a289', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3e195af-5952-46ea-9565-ee7badb5a289', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b8394de28c74b2e99420d1b07ba3637', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a7a391e-1985-42e0-aca7-1afb8646f87b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0625c5bf-02fe-4b01-851d-0a25a36ae8e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:18:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:35.656 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 0625c5bf-02fe-4b01-851d-0a25a36ae8e0 in datapath c3e195af-5952-46ea-9565-ee7badb5a289 bound to our chassis#033[00m
Oct 14 06:18:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:35.658 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3e195af-5952-46ea-9565-ee7badb5a289 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 06:18:35 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:35.658 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5d61e7-1403-480c-aa23-fd4679f26339]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:18:35 localhost nova_compute[295778]: 2025-10-14 10:18:35.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:35 localhost podman[342015]: 2025-10-14 10:18:35.674094205 +0000 UTC m=+0.205758114 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, managed_by=edpm_ansible, config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 14 06:18:35 localhost journal[236030]: ethtool ioctl error on tap0625c5bf-02: No such device
Oct 14 06:18:35 localhost ovn_controller[156286]: 2025-10-14T10:18:35Z|00388|binding|INFO|Setting lport 0625c5bf-02fe-4b01-851d-0a25a36ae8e0 ovn-installed in OVS
Oct 14 06:18:35 localhost ovn_controller[156286]: 2025-10-14T10:18:35Z|00389|binding|INFO|Setting lport 0625c5bf-02fe-4b01-851d-0a25a36ae8e0 up in Southbound
Oct 14 06:18:35 localhost nova_compute[295778]: 2025-10-14 10:18:35.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:35 localhost journal[236030]: ethtool ioctl error on tap0625c5bf-02: No such device
Oct 14 06:18:35 localhost podman[342015]: 2025-10-14 10:18:35.691241751 +0000 UTC m=+0.222905680 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible)
Oct 14 06:18:35 localhost journal[236030]: ethtool ioctl error on tap0625c5bf-02: No such device
Oct 14 06:18:35 localhost podman[342018]: 2025-10-14 10:18:35.700687343 +0000 UTC m=+0.225300765 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 14 06:18:35 localhost journal[236030]: ethtool ioctl error on tap0625c5bf-02: No such device
Oct 14 06:18:35 localhost journal[236030]: ethtool ioctl error on tap0625c5bf-02: No such device
Oct 14 06:18:35 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:18:35 localhost journal[236030]: ethtool ioctl error on tap0625c5bf-02: No such device
Oct 14 06:18:35 localhost nova_compute[295778]: 2025-10-14 10:18:35.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:35 localhost journal[236030]: ethtool ioctl error on tap0625c5bf-02: No such device
Oct 14 06:18:35 localhost journal[236030]: ethtool ioctl error on tap0625c5bf-02: No such device
Oct 14 06:18:35 localhost podman[342016]: 2025-10-14 10:18:35.73105621 +0000 UTC m=+0.261918288 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:18:35 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:18:35 localhost nova_compute[295778]: 2025-10-14 10:18:35.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:35 localhost podman[342018]: 2025-10-14 10:18:35.78553138 +0000 UTC m=+0.310144862 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 14 06:18:35 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:18:35 localhost nova_compute[295778]: 2025-10-14 10:18:35.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:18:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 14 06:18:36 localhost systemd[1]: tmp-crun.0tqqZy.mount: Deactivated successfully.
Oct 14 06:18:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e181 e181: 6 total, 6 up, 6 in
Oct 14 06:18:36 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e181: 6 total, 6 up, 6 in
Oct 14 06:18:36 localhost podman[342160]:
Oct 14 06:18:36 localhost podman[342160]: 2025-10-14 10:18:36.603237984 +0000 UTC m=+0.096741705 container create 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 06:18:36 localhost systemd[1]: Started libpod-conmon-515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25.scope.
Oct 14 06:18:36 localhost podman[342160]: 2025-10-14 10:18:36.556407448 +0000 UTC m=+0.049911189 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 14 06:18:36 localhost systemd[1]: Started libcrun container.
Oct 14 06:18:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7c00b776645fbb5e04fa1803abb2f407e2db02afd65c6ddd363693f9c5a9af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:18:36 localhost podman[342160]: 2025-10-14 10:18:36.68917815 +0000 UTC m=+0.182681891 container init 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:18:36 localhost podman[342160]: 2025-10-14 10:18:36.702466624 +0000 UTC m=+0.195970335 container start 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:18:36 localhost dnsmasq[342178]: started, version 2.85 cachesize 150 Oct 14 06:18:36 localhost dnsmasq[342178]: DNS service limited to local subnets Oct 14 06:18:36 localhost dnsmasq[342178]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:18:36 localhost dnsmasq[342178]: warning: no upstream servers 
configured Oct 14 06:18:36 localhost dnsmasq-dhcp[342178]: DHCPv6, static leases only on 2001:db8:3::, lease time 1d Oct 14 06:18:36 localhost dnsmasq[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/addn_hosts - 0 addresses Oct 14 06:18:36 localhost dnsmasq-dhcp[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/host Oct 14 06:18:36 localhost dnsmasq-dhcp[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/opts Oct 14 06:18:36 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:36.762 270389 INFO neutron.agent.dhcp.agent [None req-878ddf49-da0f-4d27-9e6f-27a2a8fd0199 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:18:35Z, description=, device_id=753a124a-202a-47c3-a41b-b95c939939b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4b7ba025-3b25-4e01-8521-6ab39bea4428, ip_allocation=immediate, mac_address=fa:16:3e:e9:a3:16, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:18:33Z, description=, dns_domain=, id=c3e195af-5952-46ea-9565-ee7badb5a289, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1228353615, port_security_enabled=True, project_id=6b8394de28c74b2e99420d1b07ba3637, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9388, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2666, status=ACTIVE, subnets=['8a9620eb-7a17-4677-acff-2ea13d7b78cf'], tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:18:34Z, vlan_transparent=None, network_id=c3e195af-5952-46ea-9565-ee7badb5a289, port_security_enabled=False, 
project_id=6b8394de28c74b2e99420d1b07ba3637, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2676, status=DOWN, tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:18:35Z on network c3e195af-5952-46ea-9565-ee7badb5a289#033[00m Oct 14 06:18:36 localhost nova_compute[295778]: 2025-10-14 10:18:36.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:18:36 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:36.970 270389 INFO neutron.agent.dhcp.agent [None req-bf1150cc-573f-45b5-b25e-6ba5f13e4e1a - - - - - -] DHCP configuration for ports {'14e4825b-07d1-4e91-916c-9e18784e811c'} is completed#033[00m Oct 14 06:18:36 localhost dnsmasq[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/addn_hosts - 1 addresses Oct 14 06:18:36 localhost dnsmasq-dhcp[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/host Oct 14 06:18:36 localhost dnsmasq-dhcp[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/opts Oct 14 06:18:36 localhost podman[342198]: 2025-10-14 10:18:36.972900179 +0000 UTC m=+0.067221050 container kill 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:18:37 localhost neutron_dhcp_agent[270385]: 
2025-10-14 10:18:37.227 270389 INFO neutron.agent.dhcp.agent [None req-e5179f7f-0b2f-4a31-bba4-1e5b4a345115 - - - - - -] DHCP configuration for ports {'4b7ba025-3b25-4e01-8521-6ab39bea4428'} is completed#033[00m Oct 14 06:18:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 80 KiB/s rd, 22 MiB/s wr, 115 op/s Oct 14 06:18:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e181 do_prune osdmap full prune enabled Oct 14 06:18:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e182 e182: 6 total, 6 up, 6 in Oct 14 06:18:37 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e182: 6 total, 6 up, 6 in Oct 14 06:18:37 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:37.685 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:18:35Z, description=, device_id=753a124a-202a-47c3-a41b-b95c939939b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4b7ba025-3b25-4e01-8521-6ab39bea4428, ip_allocation=immediate, mac_address=fa:16:3e:e9:a3:16, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:18:33Z, description=, dns_domain=, id=c3e195af-5952-46ea-9565-ee7badb5a289, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1228353615, port_security_enabled=True, project_id=6b8394de28c74b2e99420d1b07ba3637, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9388, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2666, status=ACTIVE, 
subnets=['8a9620eb-7a17-4677-acff-2ea13d7b78cf'], tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:18:34Z, vlan_transparent=None, network_id=c3e195af-5952-46ea-9565-ee7badb5a289, port_security_enabled=False, project_id=6b8394de28c74b2e99420d1b07ba3637, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2676, status=DOWN, tags=[], tenant_id=6b8394de28c74b2e99420d1b07ba3637, updated_at=2025-10-14T10:18:35Z on network c3e195af-5952-46ea-9565-ee7badb5a289#033[00m Oct 14 06:18:37 localhost dnsmasq[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/addn_hosts - 1 addresses Oct 14 06:18:37 localhost dnsmasq-dhcp[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/host Oct 14 06:18:37 localhost dnsmasq-dhcp[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/opts Oct 14 06:18:37 localhost podman[342236]: 2025-10-14 10:18:37.897758323 +0000 UTC m=+0.063141061 container kill 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:18:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6d6d1fa3-a36c-4fbc-8964-52769dbebb23", "snap_name": "fa09aca6-e753-4e87-bc49-7d269a164297", "format": "json"}]: dispatch Oct 14 06:18:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fa09aca6-e753-4e87-bc49-7d269a164297, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fa09aca6-e753-4e87-bc49-7d269a164297, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:38 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:38.185 270389 INFO neutron.agent.dhcp.agent [None req-686fcee7-0f06-4e68-bde8-1ade19207680 - - - - - -] DHCP configuration for ports {'4b7ba025-3b25-4e01-8521-6ab39bea4428'} is completed#033[00m Oct 14 06:18:38 localhost nova_compute[295778]: 2025-10-14 10:18:38.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:38 localhost nova_compute[295778]: 2025-10-14 10:18:38.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:18:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:18:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:18:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:18:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:18:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:18:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:18:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 79 KiB/s rd, 21 MiB/s wr, 113 op/s Oct 14 06:18:39 localhost nova_compute[295778]: 2025-10-14 10:18:39.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:18:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:18:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e182 do_prune osdmap full prune enabled Oct 14 06:18:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e183 e183: 6 total, 6 up, 6 in Oct 14 06:18:39 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e183: 6 total, 6 up, 6 in Oct 14 06:18:40 localhost nova_compute[295778]: 2025-10-14 10:18:40.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 192 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 2.7 MiB/s wr, 108 op/s Oct 14 06:18:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 192 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 2.3 MiB/s wr, 95 op/s Oct 14 06:18:43 localhost dnsmasq[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/addn_hosts - 0 addresses Oct 14 06:18:43 localhost dnsmasq-dhcp[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/host 
Oct 14 06:18:43 localhost dnsmasq-dhcp[342178]: read /var/lib/neutron/dhcp/c3e195af-5952-46ea-9565-ee7badb5a289/opts Oct 14 06:18:43 localhost podman[342274]: 2025-10-14 10:18:43.478815801 +0000 UTC m=+0.066028498 container kill 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009) Oct 14 06:18:43 localhost nova_compute[295778]: 2025-10-14 10:18:43.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:43 localhost nova_compute[295778]: 2025-10-14 10:18:43.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:43 localhost ovn_controller[156286]: 2025-10-14T10:18:43Z|00390|binding|INFO|Releasing lport 0625c5bf-02fe-4b01-851d-0a25a36ae8e0 from this chassis (sb_readonly=0) Oct 14 06:18:43 localhost kernel: device tap0625c5bf-02 left promiscuous mode Oct 14 06:18:43 localhost ovn_controller[156286]: 2025-10-14T10:18:43Z|00391|binding|INFO|Setting lport 0625c5bf-02fe-4b01-851d-0a25a36ae8e0 down in Southbound Oct 14 06:18:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:43.707 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-c3e195af-5952-46ea-9565-ee7badb5a289', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3e195af-5952-46ea-9565-ee7badb5a289', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b8394de28c74b2e99420d1b07ba3637', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a7a391e-1985-42e0-aca7-1afb8646f87b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0625c5bf-02fe-4b01-851d-0a25a36ae8e0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:18:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:43.708 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 0625c5bf-02fe-4b01-851d-0a25a36ae8e0 in datapath c3e195af-5952-46ea-9565-ee7badb5a289 unbound from our chassis#033[00m Oct 14 06:18:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:43.709 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3e195af-5952-46ea-9565-ee7badb5a289 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:18:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:43.710 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[9d993c10-7746-4aa5-857c-66e7e1e08d35]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:18:43 localhost nova_compute[295778]: 2025-10-14 10:18:43.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:44 localhost dnsmasq[342178]: exiting on receipt of SIGTERM Oct 14 06:18:44 localhost podman[342313]: 2025-10-14 10:18:44.350887821 +0000 UTC m=+0.065047131 container kill 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:18:44 localhost systemd[1]: tmp-crun.uRPmaS.mount: Deactivated successfully. Oct 14 06:18:44 localhost systemd[1]: libpod-515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25.scope: Deactivated successfully. 
Oct 14 06:18:44 localhost podman[342325]: 2025-10-14 10:18:44.425323972 +0000 UTC m=+0.062004881 container died 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:18:44 localhost podman[342325]: 2025-10-14 10:18:44.459752827 +0000 UTC m=+0.096433686 container cleanup 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:18:44 localhost systemd[1]: libpod-conmon-515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25.scope: Deactivated successfully. Oct 14 06:18:44 localhost systemd[1]: var-lib-containers-storage-overlay-ca7c00b776645fbb5e04fa1803abb2f407e2db02afd65c6ddd363693f9c5a9af-merged.mount: Deactivated successfully. Oct 14 06:18:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:18:44 localhost podman[342334]: 2025-10-14 10:18:44.508652889 +0000 UTC m=+0.130048161 container remove 515a645d59866f65871381a7680a4a6af93dd314c7f2686bbe27a60daf6bfe25 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3e195af-5952-46ea-9565-ee7badb5a289, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:18:44 localhost systemd[1]: run-netns-qdhcp\x2dc3e195af\x2d5952\x2d46ea\x2d9565\x2dee7badb5a289.mount: Deactivated successfully. Oct 14 06:18:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:44.549 270389 INFO neutron.agent.dhcp.agent [None req-e6a3f10b-d772-4c22-a5e9-3f0c90549081 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:18:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:44.614 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:18:44 localhost nova_compute[295778]: 2025-10-14 10:18:44.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:18:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e183 do_prune osdmap full prune enabled Oct 14 06:18:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e184 e184: 6 total, 6 up, 6 in Oct 14 06:18:44 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e184: 6 total, 6 up, 6 in Oct 14 06:18:45 localhost 
ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 192 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.0 MiB/s wr, 85 op/s Oct 14 06:18:46 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6d6d1fa3-a36c-4fbc-8964-52769dbebb23", "snap_name": "fa09aca6-e753-4e87-bc49-7d269a164297_006c7913-c1c0-441f-92b1-89554ad43f5b", "force": true, "format": "json"}]: dispatch Oct 14 06:18:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fa09aca6-e753-4e87-bc49-7d269a164297_006c7913-c1c0-441f-92b1-89554ad43f5b, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:46 localhost nova_compute[295778]: 2025-10-14 10:18:46.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23/.meta.tmp' Oct 14 06:18:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23/.meta.tmp' to config b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23/.meta' Oct 14 06:18:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fa09aca6-e753-4e87-bc49-7d269a164297_006c7913-c1c0-441f-92b1-89554ad43f5b, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:46 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6d6d1fa3-a36c-4fbc-8964-52769dbebb23", "snap_name": "fa09aca6-e753-4e87-bc49-7d269a164297", "force": true, "format": "json"}]: dispatch Oct 14 06:18:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fa09aca6-e753-4e87-bc49-7d269a164297, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23/.meta.tmp' Oct 14 06:18:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23/.meta.tmp' to config b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23/.meta' Oct 14 06:18:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fa09aca6-e753-4e87-bc49-7d269a164297, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:47 localhost nova_compute[295778]: 2025-10-14 10:18:47.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 192 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 2.0 MiB/s wr, 83 op/s Oct 14 06:18:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:18:47 localhost systemd[1]: tmp-crun.GFHpgM.mount: Deactivated successfully. 
Oct 14 06:18:47 localhost podman[342358]: 2025-10-14 10:18:47.548375847 +0000 UTC m=+0.088082945 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 06:18:47 localhost podman[342358]: 2025-10-14 10:18:47.563042327 +0000 UTC m=+0.102749415 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 06:18:47 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:18:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:18:47 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1004005265' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:18:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:18:47 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1004005265' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:18:48 localhost nova_compute[295778]: 2025-10-14 10:18:48.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:48 localhost dnsmasq[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/addn_hosts - 0 addresses Oct 14 06:18:48 localhost dnsmasq-dhcp[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/host Oct 14 06:18:48 localhost dnsmasq-dhcp[341677]: read /var/lib/neutron/dhcp/3631450e-2b6d-413b-aa35-1559ff0a66da/opts Oct 14 06:18:48 localhost podman[342394]: 2025-10-14 10:18:48.574911996 +0000 UTC m=+0.106763191 container kill 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:18:48 localhost 
nova_compute[295778]: 2025-10-14 10:18:48.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:48 localhost ovn_controller[156286]: 2025-10-14T10:18:48Z|00392|binding|INFO|Releasing lport d5985230-8720-4d5e-8cb0-b3919a919ed0 from this chassis (sb_readonly=0) Oct 14 06:18:48 localhost kernel: device tapd5985230-87 left promiscuous mode Oct 14 06:18:48 localhost ovn_controller[156286]: 2025-10-14T10:18:48Z|00393|binding|INFO|Setting lport d5985230-8720-4d5e-8cb0-b3919a919ed0 down in Southbound Oct 14 06:18:48 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:48.724 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-3631450e-2b6d-413b-aa35-1559ff0a66da', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3631450e-2b6d-413b-aa35-1559ff0a66da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b8394de28c74b2e99420d1b07ba3637', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66862107-f277-4cfe-a4b9-1227f09eeff9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5985230-8720-4d5e-8cb0-b3919a919ed0) old=Port_Binding(up=[True], 
chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:18:48 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:48.727 161932 INFO neutron.agent.ovn.metadata.agent [-] Port d5985230-8720-4d5e-8cb0-b3919a919ed0 in datapath 3631450e-2b6d-413b-aa35-1559ff0a66da unbound from our chassis#033[00m Oct 14 06:18:48 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:48.729 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3631450e-2b6d-413b-aa35-1559ff0a66da or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:18:48 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:48.732 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[93d80ac3-4354-4638-ad2d-0a44fc62d863]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:18:48 localhost nova_compute[295778]: 2025-10-14 10:18:48.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:49 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6d6d1fa3-a36c-4fbc-8964-52769dbebb23", "format": "json"}]: dispatch Oct 14 06:18:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:18:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:18:49 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 
'clone-status' is not allowed on subvolume '6d6d1fa3-a36c-4fbc-8964-52769dbebb23' of type subvolume Oct 14 06:18:49 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:49.314+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6d6d1fa3-a36c-4fbc-8964-52769dbebb23' of type subvolume Oct 14 06:18:49 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6d6d1fa3-a36c-4fbc-8964-52769dbebb23", "force": true, "format": "json"}]: dispatch Oct 14 06:18:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6d6d1fa3-a36c-4fbc-8964-52769dbebb23'' moved to trashcan Oct 14 06:18:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:18:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6d6d1fa3-a36c-4fbc-8964-52769dbebb23, vol_name:cephfs) < "" Oct 14 06:18:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 192 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 1.7 MiB/s wr, 70 op/s Oct 14 06:18:49 localhost systemd[1]: tmp-crun.aOnI6f.mount: Deactivated successfully. 
Oct 14 06:18:49 localhost dnsmasq[341677]: exiting on receipt of SIGTERM Oct 14 06:18:49 localhost podman[342435]: 2025-10-14 10:18:49.479267145 +0000 UTC m=+0.070556268 container kill 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009) Oct 14 06:18:49 localhost systemd[1]: libpod-6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71.scope: Deactivated successfully. Oct 14 06:18:49 localhost podman[342449]: 2025-10-14 10:18:49.552989556 +0000 UTC m=+0.059198205 container died 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:18:49 localhost podman[342449]: 2025-10-14 10:18:49.58466488 +0000 UTC m=+0.090873499 container cleanup 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:18:49 localhost systemd[1]: libpod-conmon-6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71.scope: Deactivated successfully. Oct 14 06:18:49 localhost podman[342450]: 2025-10-14 10:18:49.63656845 +0000 UTC m=+0.137461558 container remove 6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3631450e-2b6d-413b-aa35-1559ff0a66da, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009) Oct 14 06:18:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:49.661 270389 INFO neutron.agent.dhcp.agent [None req-23886c11-fb34-4eee-93d0-643a033bedf2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:18:49 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:49.853 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:18:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.973 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 
10:18:49.974 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.982 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.982 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:18:49.982 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:18:50 localhost nova_compute[295778]: 2025-10-14 10:18:50.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:50 localhost systemd[1]: var-lib-containers-storage-overlay-fe5556b84245012c9b015e5a964cbc2f77d03df551e152817ff69ee083c305a3-merged.mount: Deactivated successfully. Oct 14 06:18:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bdbf3783f32edd5660333a2f5a29880dcfb6a6cfb1ebd354000438568c73c71-userdata-shm.mount: Deactivated successfully. Oct 14 06:18:50 localhost systemd[1]: run-netns-qdhcp\x2d3631450e\x2d2b6d\x2d413b\x2daa35\x2d1559ff0a66da.mount: Deactivated successfully. 
Oct 14 06:18:51 localhost nova_compute[295778]: 2025-10-14 10:18:51.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e184 do_prune osdmap full prune enabled Oct 14 06:18:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e185 e185: 6 total, 6 up, 6 in Oct 14 06:18:51 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e185: 6 total, 6 up, 6 in Oct 14 06:18:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 192 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 15 KiB/s wr, 48 op/s Oct 14 06:18:52 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e185 do_prune osdmap full prune enabled Oct 14 06:18:52 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e186 e186: 6 total, 6 up, 6 in Oct 14 06:18:52 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e186: 6 total, 6 up, 6 in Oct 14 06:18:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/.meta.tmp' Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/.meta.tmp' to config b'/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/.meta' Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:18:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "format": "json"}]: dispatch Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:18:52 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:18:52 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:18:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4532a019-ded1-417f-964d-fef426a6e328", "format": "json"}]: dispatch Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4532a019-ded1-417f-964d-fef426a6e328, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes 
INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4532a019-ded1-417f-964d-fef426a6e328, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:18:52 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4532a019-ded1-417f-964d-fef426a6e328' of type subvolume Oct 14 06:18:52 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:18:52.672+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4532a019-ded1-417f-964d-fef426a6e328' of type subvolume Oct 14 06:18:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4532a019-ded1-417f-964d-fef426a6e328", "force": true, "format": "json"}]: dispatch Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4532a019-ded1-417f-964d-fef426a6e328, vol_name:cephfs) < "" Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4532a019-ded1-417f-964d-fef426a6e328'' moved to trashcan Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:18:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4532a019-ded1-417f-964d-fef426a6e328, vol_name:cephfs) < "" Oct 14 06:18:53 localhost sshd[342479]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:18:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 192 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 14 KiB/s wr, 46 op/s Oct 14 
06:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:18:53 localhost systemd[1]: tmp-crun.vtWQdQ.mount: Deactivated successfully. Oct 14 06:18:53 localhost podman[342481]: 2025-10-14 10:18:53.558815847 +0000 UTC m=+0.100330351 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 06:18:53 localhost podman[342481]: 2025-10-14 10:18:53.598062261 +0000 UTC m=+0.139576775 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:18:53 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:18:53 localhost nova_compute[295778]: 2025-10-14 10:18:53.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:53 localhost podman[342482]: 2025-10-14 10:18:53.633422562 +0000 UTC m=+0.171619017 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:18:53 localhost podman[342482]: 2025-10-14 10:18:53.648489502 +0000 UTC m=+0.186686027 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , 
managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:18:53 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:18:54 localhost systemd[1]: tmp-crun.411PvI.mount: Deactivated successfully. Oct 14 06:18:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:18:55 localhost podman[342537]: 2025-10-14 10:18:55.007674192 +0000 UTC m=+0.067537618 container kill 183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4332f611-ef5b-4530-97fe-ac580679cec0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:18:55 localhost dnsmasq[340512]: exiting on receipt of SIGTERM Oct 14 06:18:55 localhost systemd[1]: libpod-183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07.scope: Deactivated successfully. 
Oct 14 06:18:55 localhost ovn_controller[156286]: 2025-10-14T10:18:55Z|00394|binding|INFO|Removing iface tap645b08db-9c ovn-installed in OVS Oct 14 06:18:55 localhost ovn_controller[156286]: 2025-10-14T10:18:55Z|00395|binding|INFO|Removing lport 645b08db-9c45-49ac-a7b5-276d15e1039b ovn-installed in OVS Oct 14 06:18:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:55.043 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port e8ccd19a-7157-448d-b5a6-d0e6c4cf6a45 with type ""#033[00m Oct 14 06:18:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:55.044 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-4332f611-ef5b-4530-97fe-ac580679cec0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4332f611-ef5b-4530-97fe-ac580679cec0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c4e628039e94868b41efbbdc1307f19', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4979737-65a6-47b2-9379-ee1358b7d572, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=645b08db-9c45-49ac-a7b5-276d15e1039b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:18:55 
localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:55.046 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 645b08db-9c45-49ac-a7b5-276d15e1039b in datapath 4332f611-ef5b-4530-97fe-ac580679cec0 unbound from our chassis#033[00m Oct 14 06:18:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:55.049 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4332f611-ef5b-4530-97fe-ac580679cec0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:18:55 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:55.050 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb0bc5d-f687-4426-ab08-352809116507]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:18:55 localhost nova_compute[295778]: 2025-10-14 10:18:55.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:55 localhost nova_compute[295778]: 2025-10-14 10:18:55.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:55 localhost podman[342550]: 2025-10-14 10:18:55.107105328 +0000 UTC m=+0.080442502 container died 183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4332f611-ef5b-4530-97fe-ac580679cec0, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 06:18:55 localhost podman[342550]: 2025-10-14 10:18:55.137335962 +0000 UTC 
m=+0.110673096 container cleanup 183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4332f611-ef5b-4530-97fe-ac580679cec0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:18:55 localhost systemd[1]: libpod-conmon-183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07.scope: Deactivated successfully. Oct 14 06:18:55 localhost podman[342552]: 2025-10-14 10:18:55.193409474 +0000 UTC m=+0.158881259 container remove 183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4332f611-ef5b-4530-97fe-ac580679cec0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Oct 14 06:18:55 localhost nova_compute[295778]: 2025-10-14 10:18:55.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:55 localhost kernel: device tap645b08db-9c left promiscuous mode Oct 14 06:18:55 localhost nova_compute[295778]: 2025-10-14 10:18:55.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:55.245 270389 INFO neutron.agent.dhcp.agent 
[None req-822dd6f3-6e20-41bf-baa0-e06e1e3778d8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:18:55 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:18:55.246 270389 INFO neutron.agent.dhcp.agent [None req-822dd6f3-6e20-41bf-baa0-e06e1e3778d8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:18:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 192 MiB data, 967 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 28 KiB/s wr, 99 op/s Oct 14 06:18:55 localhost nova_compute[295778]: 2025-10-14 10:18:55.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:55 localhost systemd[1]: var-lib-containers-storage-overlay-91460a53f75e0738d1a1d3710a3efc4bf382717fb6d4d3207ea9ecbdeef451d5-merged.mount: Deactivated successfully. Oct 14 06:18:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-183db8231017008e63b128765f8824ae20c47cc4528f69549ded9634f13f2d07-userdata-shm.mount: Deactivated successfully. Oct 14 06:18:55 localhost systemd[1]: run-netns-qdhcp\x2d4332f611\x2def5b\x2d4530\x2d97fe\x2dac580679cec0.mount: Deactivated successfully. 
Oct 14 06:18:56 localhost nova_compute[295778]: 2025-10-14 10:18:56.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 192 MiB data, 967 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 28 KiB/s wr, 99 op/s Oct 14 06:18:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:57.642 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:18:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:57.643 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:18:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:18:57.643 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:18:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "49fe1b10-39aa-4c6d-b588-71b3f9300e29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:18:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, 
vol_name:cephfs) < "" Oct 14 06:18:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29/.meta.tmp' Oct 14 06:18:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29/.meta.tmp' to config b'/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29/.meta' Oct 14 06:18:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, vol_name:cephfs) < "" Oct 14 06:18:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "49fe1b10-39aa-4c6d-b588-71b3f9300e29", "format": "json"}]: dispatch Oct 14 06:18:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, vol_name:cephfs) < "" Oct 14 06:18:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, vol_name:cephfs) < "" Oct 14 06:18:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:18:57 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:18:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 06:18:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:18:58 localhost podman[342582]: 2025-10-14 10:18:58.546358745 +0000 UTC m=+0.088908667 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:18:58 localhost podman[342582]: 2025-10-14 10:18:58.561189029 +0000 UTC m=+0.103738961 container 
exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid) Oct 14 06:18:58 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:18:58 localhost nova_compute[295778]: 2025-10-14 10:18:58.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:18:58 localhost podman[342583]: 2025-10-14 10:18:58.67022546 +0000 UTC m=+0.208193199 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 06:18:58 localhost podman[342583]: 2025-10-14 10:18:58.712127545 +0000 UTC m=+0.250095284 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Oct 14 06:18:58 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: 
Deactivated successfully. Oct 14 06:18:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 192 MiB data, 967 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 14 KiB/s wr, 52 op/s Oct 14 06:18:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:18:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e186 do_prune osdmap full prune enabled Oct 14 06:18:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e187 e187: 6 total, 6 up, 6 in Oct 14 06:18:59 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e187: 6 total, 6 up, 6 in Oct 14 06:19:00 localhost podman[246584]: time="2025-10-14T10:19:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:19:00 localhost podman[246584]: @ - - [14/Oct/2025:10:19:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:19:00 localhost podman[246584]: @ - - [14/Oct/2025:10:19:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18898 "" "Go-http-client/1.1" Oct 14 06:19:01 localhost nova_compute[295778]: 2025-10-14 10:19:01.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:01 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "49fe1b10-39aa-4c6d-b588-71b3f9300e29", "auth_id": "tempest-cephx-id-1069349725", "tenant_id": "4d12c8bb835544c791c95609a68ae6d3", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:19:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0) Oct 14 06:19:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:01 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID tempest-cephx-id-1069349725 with tenant 4d12c8bb835544c791c95609a68ae6d3 Oct 14 06:19:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29/d37910f0-cb97-4839-8350-ba32cb1ee48b", "osd", "allow rw pool=manila_data namespace=fsvolumens_49fe1b10-39aa-4c6d-b588-71b3f9300e29", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:19:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29/d37910f0-cb97-4839-8350-ba32cb1ee48b", "osd", "allow rw pool=manila_data namespace=fsvolumens_49fe1b10-39aa-4c6d-b588-71b3f9300e29", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": 
"auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29/d37910f0-cb97-4839-8350-ba32cb1ee48b", "osd", "allow rw pool=manila_data namespace=fsvolumens_49fe1b10-39aa-4c6d-b588-71b3f9300e29", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 192 MiB data, 951 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 88 op/s Oct 14 06:19:01 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:01 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29/d37910f0-cb97-4839-8350-ba32cb1ee48b", "osd", "allow rw pool=manila_data namespace=fsvolumens_49fe1b10-39aa-4c6d-b588-71b3f9300e29", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:01 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29/d37910f0-cb97-4839-8350-ba32cb1ee48b", "osd", "allow rw pool=manila_data namespace=fsvolumens_49fe1b10-39aa-4c6d-b588-71b3f9300e29", "mon", "allow 
r"], "format": "json"}]': finished Oct 14 06:19:01 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:01.931 2 INFO neutron.agent.securitygroups_rpc [None req-4d772f3b-ace0-4ef8-a89a-962408816e43 c858e15b48804013a3e03a1551996d0b f51b17b0ed0a40019c4fcd777d26b72d - - default default] Security group rule updated ['d9ec2c86-56aa-409c-8be6-91e4f9464bbb']#033[00m Oct 14 06:19:02 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:02.858 2 INFO neutron.agent.securitygroups_rpc [None req-68296946-0c64-4977-938d-88b8a7ab90fb bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:03 localhost openstack_network_exporter[248748]: ERROR 10:19:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:19:03 localhost openstack_network_exporter[248748]: ERROR 10:19:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:19:03 localhost openstack_network_exporter[248748]: ERROR 10:19:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:19:03 localhost openstack_network_exporter[248748]: ERROR 10:19:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:19:03 localhost openstack_network_exporter[248748]: Oct 14 06:19:03 localhost openstack_network_exporter[248748]: ERROR 10:19:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:19:03 localhost openstack_network_exporter[248748]: Oct 14 06:19:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 192 MiB data, 951 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 81 op/s Oct 14 06:19:03 localhost nova_compute[295778]: 2025-10-14 10:19:03.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:04 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:04.399 2 INFO neutron.agent.securitygroups_rpc [None req-3a636727-ac42-4163-b371-317e98c4e3cd bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:04 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:04.486 2 INFO neutron.agent.securitygroups_rpc [None req-3a636727-ac42-4163-b371-317e98c4e3cd bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "49fe1b10-39aa-4c6d-b588-71b3f9300e29", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, vol_name:cephfs) < "" Oct 14 06:19:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0) Oct 14 06:19:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} v 0) Oct 14 06:19:04 localhost ceph-mon[307093]: 
log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch Oct 14 06:19:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, vol_name:cephfs) < "" Oct 14 06:19:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "49fe1b10-39aa-4c6d-b588-71b3f9300e29", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, vol_name:cephfs) < "" Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1069349725, client_metadata.root=/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29/d37910f0-cb97-4839-8350-ba32cb1ee48b Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, vol_name:cephfs) < "" Oct 14 06:19:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 
-' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "49fe1b10-39aa-4c6d-b588-71b3f9300e29", "format": "json"}]: dispatch
Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:19:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:04.756+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '49fe1b10-39aa-4c6d-b588-71b3f9300e29' of type subvolume
Oct 14 06:19:04 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '49fe1b10-39aa-4c6d-b588-71b3f9300e29' of type subvolume
Oct 14 06:19:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "49fe1b10-39aa-4c6d-b588-71b3f9300e29", "force": true, "format": "json"}]: dispatch
Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, vol_name:cephfs) < ""
Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/49fe1b10-39aa-4c6d-b588-71b3f9300e29'' moved to trashcan
Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:19:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:49fe1b10-39aa-4c6d-b588-71b3f9300e29, vol_name:cephfs) < ""
Oct 14 06:19:04 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:04.796 2 INFO neutron.agent.securitygroups_rpc [None req-2b0cc61c-bbef-4d46-aa4f-f455510212c2 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m
Oct 14 06:19:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:04.807 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:19:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:19:05 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:05.176 2 INFO neutron.agent.securitygroups_rpc [None req-2944b2cf-abf2-4c66-9fd7-e0cbff7f7cb3 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m
Oct 14 06:19:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 238 MiB data, 1015 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 14 06:19:05 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch
Oct 14 06:19:05 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch
Oct 14 06:19:05 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished
Oct 14 06:19:05 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:05.760 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:19:06 localhost nova_compute[295778]: 2025-10-14 10:19:06.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:19:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:19:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:19:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:19:06 localhost systemd[1]: tmp-crun.MCMHrD.mount: Deactivated successfully.
Oct 14 06:19:06 localhost podman[342621]: 2025-10-14 10:19:06.550947377 +0000 UTC m=+0.080516684 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 06:19:06 localhost podman[342621]: 2025-10-14 10:19:06.587471758 +0000 UTC m=+0.117041145 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 14 06:19:06 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:19:06 localhost podman[342619]: 2025-10-14 10:19:06.597624099 +0000 UTC m=+0.134840179 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 14 06:19:06 localhost podman[342619]: 2025-10-14 10:19:06.64129287 +0000 UTC m=+0.178508990 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 14 06:19:06 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:19:06 localhost podman[342620]: 2025-10-14 10:19:06.655451486 +0000 UTC m=+0.187283063 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 14 06:19:06 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "19f2f741-666a-475e-bbf3-27d9eb249d4e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:19:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:19f2f741-666a-475e-bbf3-27d9eb249d4e, vol_name:cephfs) < ""
Oct 14 06:19:06 localhost podman[342620]: 2025-10-14 10:19:06.726797435 +0000 UTC m=+0.258629122 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:19:06 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:19:06 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/19f2f741-666a-475e-bbf3-27d9eb249d4e/.meta.tmp'
Oct 14 06:19:06 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/19f2f741-666a-475e-bbf3-27d9eb249d4e/.meta.tmp' to config b'/volumes/_nogroup/19f2f741-666a-475e-bbf3-27d9eb249d4e/.meta'
Oct 14 06:19:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:19f2f741-666a-475e-bbf3-27d9eb249d4e, vol_name:cephfs) < ""
Oct 14 06:19:06 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "19f2f741-666a-475e-bbf3-27d9eb249d4e", "format": "json"}]: dispatch
Oct 14 06:19:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:19f2f741-666a-475e-bbf3-27d9eb249d4e, vol_name:cephfs) < ""
Oct 14 06:19:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:19f2f741-666a-475e-bbf3-27d9eb249d4e, vol_name:cephfs) < ""
Oct 14 06:19:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:19:06 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:19:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 238 MiB data, 1015 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 14 06:19:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:19:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < ""
Oct 14 06:19:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb/.meta.tmp'
Oct 14 06:19:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb/.meta.tmp' to config b'/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb/.meta'
Oct 14 06:19:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < ""
Oct 14 06:19:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "format": "json"}]: dispatch
Oct 14 06:19:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < ""
Oct 14 06:19:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < ""
Oct 14 06:19:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:19:07 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:19:07 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:07.874 2 INFO neutron.agent.securitygroups_rpc [None req-95f9645e-0b06-4e03-a9e5-66080516d29e bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m
Oct 14 06:19:08 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:08.350 2 INFO neutron.agent.securitygroups_rpc [None req-e4805ede-8e41-4486-ae97-356dc9144e24 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m
Oct 14 06:19:08 localhost nova_compute[295778]: 2025-10-14 10:19:08.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:19:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:19:09
Oct 14 06:19:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:19:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap
Oct 14 06:19:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['manila_metadata', 'manila_data', 'images', 'backups', '.mgr', 'volumes', 'vms']
Oct 14 06:19:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:19:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:19:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:19:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:19:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:19:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:19:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:19:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 238 MiB data, 1015 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32)
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029689462939698494 of space, bias 1.0, pg target 0.5927996100293133 quantized to 32 (current 32)
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32)
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.416259538432906e-05 quantized to 32 (current 32)
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021665038153731623 quantized to 32 (current 32)
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:19:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 3.9804054997208265e-05 of space, bias 4.0, pg target 0.03163095570444817 quantized to 16 (current 16)
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:19:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:19:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:19:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "auth_id": "tempest-cephx-id-1069349725", "tenant_id": "4d12c8bb835544c791c95609a68ae6d3", "access_level": "rw", "format": "json"}]: dispatch
Oct 14 06:19:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < ""
Oct 14 06:19:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0)
Oct 14 06:19:10 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch
Oct 14 06:19:10 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID tempest-cephx-id-1069349725 with tenant 4d12c8bb835544c791c95609a68ae6d3
Oct 14 06:19:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb/8dddc9a5-74bf-4406-b386-a72917ae6624", "osd", "allow rw pool=manila_data namespace=fsvolumens_d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:19:10 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb/8dddc9a5-74bf-4406-b386-a72917ae6624", "osd", "allow rw pool=manila_data namespace=fsvolumens_d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:19:10 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb/8dddc9a5-74bf-4406-b386-a72917ae6624", "osd", "allow rw pool=manila_data namespace=fsvolumens_d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:19:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < ""
Oct 14 06:19:11 localhost nova_compute[295778]: 2025-10-14 10:19:11.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:19:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 192 MiB data, 946 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 137 op/s
Oct 14 06:19:11 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch
Oct 14 06:19:11 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb/8dddc9a5-74bf-4406-b386-a72917ae6624", "osd", "allow rw pool=manila_data namespace=fsvolumens_d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:19:11 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb/8dddc9a5-74bf-4406-b386-a72917ae6624", "osd", "allow rw pool=manila_data namespace=fsvolumens_d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:19:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "19f2f741-666a-475e-bbf3-27d9eb249d4e", "format": "json"}]: dispatch
Oct 14 06:19:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:19f2f741-666a-475e-bbf3-27d9eb249d4e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:19:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:19f2f741-666a-475e-bbf3-27d9eb249d4e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:19:12 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:12.183+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '19f2f741-666a-475e-bbf3-27d9eb249d4e' of type subvolume
Oct 14 06:19:12 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '19f2f741-666a-475e-bbf3-27d9eb249d4e' of type subvolume
Oct 14 06:19:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "19f2f741-666a-475e-bbf3-27d9eb249d4e", "force": true, "format": "json"}]: dispatch
Oct 14 06:19:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:19f2f741-666a-475e-bbf3-27d9eb249d4e, vol_name:cephfs) < ""
Oct 14 06:19:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/19f2f741-666a-475e-bbf3-27d9eb249d4e'' moved to trashcan
Oct 14 06:19:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:19:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:19f2f741-666a-475e-bbf3-27d9eb249d4e, vol_name:cephfs) < ""
Oct 14 06:19:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 192 MiB data, 946 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 14 06:19:13 localhost nova_compute[295778]: 2025-10-14 10:19:13.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:19:13 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:19:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < ""
Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp'
Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp' to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta'
Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < ""
Oct 14 06:19:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "format": "json"}]: dispatch
Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < ""
Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < ""
Oct 14 06:19:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:19:14 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:19:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch
Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < ""
Oct 14 06:19:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0)
Oct 14 06:19:14 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch
Oct 14 06:19:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} v 0)
Oct 14 06:19:14 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch
Oct 14 06:19:14 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:14.506 2 INFO neutron.agent.securitygroups_rpc [None req-88adcdd5-d424-4851-9b34-ed74fafb8707 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m
Oct 14 06:19:14 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916'
entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:14 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:14 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < "" Oct 14 06:19:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < "" Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1069349725, client_metadata.root=/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb/8dddc9a5-74bf-4406-b386-a72917ae6624 Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, 
sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < "" Oct 14 06:19:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "format": "json"}]: dispatch Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:14 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:14.681+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd87b2b90-a5f1-4681-ae04-3fa3af31d3cb' of type subvolume Oct 14 06:19:14 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd87b2b90-a5f1-4681-ae04-3fa3af31d3cb' of type subvolume Oct 14 06:19:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d87b2b90-a5f1-4681-ae04-3fa3af31d3cb", "force": true, "format": "json"}]: dispatch Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < "" Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d87b2b90-a5f1-4681-ae04-3fa3af31d3cb'' moved to trashcan Oct 14 06:19:14 localhost 
ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:19:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d87b2b90-a5f1-4681-ae04-3fa3af31d3cb, vol_name:cephfs) < "" Oct 14 06:19:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:19:14 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:14.975 2 INFO neutron.agent.securitygroups_rpc [None req-e74f8eb6-5583-4f88-ba36-feb9c4fb0b0b bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:15 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:15.275 2 INFO neutron.agent.securitygroups_rpc [None req-718d9441-87bf-4ffb-9555-9941a2a03947 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 1.8 MiB/s wr, 103 op/s Oct 14 06:19:15 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:15 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:15.587 2 INFO neutron.agent.securitygroups_rpc [None req-0e2311c8-85a8-441e-8500-ba38920b937a bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:15 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:15.859 2 INFO 
neutron.agent.securitygroups_rpc [None req-8fbdee58-d50f-4ab3-9f88-3eacdf0ebeaa bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:16 localhost nova_compute[295778]: 2025-10-14 10:19:16.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:16 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:16.475 2 INFO neutron.agent.securitygroups_rpc [None req-7d0fd557-94a4-4212-93b1-9fe908adc855 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:17 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "snap_name": "5387cb38-4c9f-4b49-bada-60ce42bd2fd5", "format": "json"}]: dispatch Oct 14 06:19:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5387cb38-4c9f-4b49-bada-60ce42bd2fd5, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5387cb38-4c9f-4b49-bada-60ce42bd2fd5, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:17 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3e440449-fe29-4222-837b-3115389bed13", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: 
dispatch Oct 14 06:19:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 26 KiB/s wr, 64 op/s Oct 14 06:19:17 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13/.meta.tmp' Oct 14 06:19:17 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13/.meta.tmp' to config b'/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13/.meta' Oct 14 06:19:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:17 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3e440449-fe29-4222-837b-3115389bed13", "format": "json"}]: dispatch Oct 14 06:19:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 
handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:19:17 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:19:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:19:18 localhost podman[342688]: 2025-10-14 10:19:18.544309126 +0000 UTC m=+0.082407433 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=edpm) Oct 14 06:19:18 localhost podman[342688]: 2025-10-14 10:19:18.581112405 +0000 UTC m=+0.119210732 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:19:18 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:19:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:18.680 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:19:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:18.681 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:19:18 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:18.683 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:19:18 localhost nova_compute[295778]: 2025-10-14 10:19:18.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:18 localhost nova_compute[295778]: 2025-10-14 10:19:18.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:19.033 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:18:bc 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9cb5f697-e34c-42db-aca7-5e486551dd6a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb5f697-e34c-42db-aca7-5e486551dd6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cffabfb0ecf4b5d91a7a63dd17a370a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcfe5f47-18ab-4556-b7e8-874d7a7daff0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9c03674c-50ca-4ed4-9dac-c94e9dd6244c) old=Port_Binding(mac=['fa:16:3e:9f:18:bc 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9cb5f697-e34c-42db-aca7-5e486551dd6a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb5f697-e34c-42db-aca7-5e486551dd6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cffabfb0ecf4b5d91a7a63dd17a370a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:19:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:19.035 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9c03674c-50ca-4ed4-9dac-c94e9dd6244c in datapath 9cb5f697-e34c-42db-aca7-5e486551dd6a updated#033[00m Oct 14 06:19:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:19.037 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9cb5f697-e34c-42db-aca7-5e486551dd6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:19:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:19.039 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[67dc89e1-ce29-4413-9f65-2706f582df99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:19:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 26 KiB/s wr, 64 op/s Oct 14 06:19:19 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:19.703 2 INFO neutron.agent.securitygroups_rpc [None req-a27717d6-07e5-4b5e-baa0-f1dd7df1cd15 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:19:20 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:20.338 2 INFO neutron.agent.securitygroups_rpc [None req-1cd2d25e-90be-4ec8-893b-f4c70d148c82 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 
06:19:20 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3e440449-fe29-4222-837b-3115389bed13", "auth_id": "tempest-cephx-id-1069349725", "tenant_id": "4d12c8bb835544c791c95609a68ae6d3", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:19:20 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:3e440449-fe29-4222-837b-3115389bed13, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0) Oct 14 06:19:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:20 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID tempest-cephx-id-1069349725 with tenant 4d12c8bb835544c791c95609a68ae6d3 Oct 14 06:19:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13/42adabe8-acdf-4e4b-8d5e-5994bafb229a", "osd", "allow rw pool=manila_data namespace=fsvolumens_3e440449-fe29-4222-837b-3115389bed13", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:19:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": 
"client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13/42adabe8-acdf-4e4b-8d5e-5994bafb229a", "osd", "allow rw pool=manila_data namespace=fsvolumens_3e440449-fe29-4222-837b-3115389bed13", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13/42adabe8-acdf-4e4b-8d5e-5994bafb229a", "osd", "allow rw pool=manila_data namespace=fsvolumens_3e440449-fe29-4222-837b-3115389bed13", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:20 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:3e440449-fe29-4222-837b-3115389bed13, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:20 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "snap_name": "bf11fa89-3a4d-4bda-9fdb-3f14a4740a15", "format": "json"}]: dispatch Oct 14 06:19:20 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bf11fa89-3a4d-4bda-9fdb-3f14a4740a15, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:20 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bf11fa89-3a4d-4bda-9fdb-3f14a4740a15, 
sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:20 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:20.870 2 INFO neutron.agent.securitygroups_rpc [None req-1e9de31c-96f9-4ac3-afa0-468e18743a4e bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e187 do_prune osdmap full prune enabled Oct 14 06:19:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e188 e188: 6 total, 6 up, 6 in Oct 14 06:19:20 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e188: 6 total, 6 up, 6 in Oct 14 06:19:21 localhost nova_compute[295778]: 2025-10-14 10:19:21.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:21 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:21.213 2 INFO neutron.agent.securitygroups_rpc [None req-1e8fc2ef-2179-4fb2-8c12-8f02c2ba1726 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 177 active+clean; 192 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 29 KiB/s wr, 12 op/s Oct 14 06:19:21 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:21 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13/42adabe8-acdf-4e4b-8d5e-5994bafb229a", "osd", "allow rw pool=manila_data namespace=fsvolumens_3e440449-fe29-4222-837b-3115389bed13", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:21 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13/42adabe8-acdf-4e4b-8d5e-5994bafb229a", "osd", "allow rw pool=manila_data namespace=fsvolumens_3e440449-fe29-4222-837b-3115389bed13", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:19:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3584085487' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:19:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:19:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3584085487' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:19:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 192 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 29 KiB/s wr, 12 op/s Oct 14 06:19:23 localhost nova_compute[295778]: 2025-10-14 10:19:23.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:19:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2127316937' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:19:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:19:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2127316937' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:19:24 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:24.071 2 INFO neutron.agent.securitygroups_rpc [None req-26a85657-c795-47cc-9b5a-acc54e4fee3f bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3e440449-fe29-4222-837b-3115389bed13", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0) Oct 14 06:19:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} v 0) Oct 14 06:19:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch Oct 14 06:19:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 
172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3e440449-fe29-4222-837b-3115389bed13", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1069349725, client_metadata.root=/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13/42adabe8-acdf-4e4b-8d5e-5994bafb229a Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "snap_name": "bf11fa89-3a4d-4bda-9fdb-3f14a4740a15_c5eacebc-3635-4fc5-9a38-5a95dc405608", "force": true, "format": "json"}]: dispatch 
Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf11fa89-3a4d-4bda-9fdb-3f14a4740a15_c5eacebc-3635-4fc5-9a38-5a95dc405608, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp' Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp' to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta' Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf11fa89-3a4d-4bda-9fdb-3f14a4740a15_c5eacebc-3635-4fc5-9a38-5a95dc405608, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "snap_name": "bf11fa89-3a4d-4bda-9fdb-3f14a4740a15", "force": true, "format": "json"}]: dispatch Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf11fa89-3a4d-4bda-9fdb-3f14a4740a15, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 06:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp' Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp' to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta' Oct 14 06:19:24 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:24.521 2 INFO neutron.agent.securitygroups_rpc [None req-8c653634-5465-440b-8a86-5c6019dd6e5b bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf11fa89-3a4d-4bda-9fdb-3f14a4740a15, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3e440449-fe29-4222-837b-3115389bed13", "format": "json"}]: dispatch Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3e440449-fe29-4222-837b-3115389bed13, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3e440449-fe29-4222-837b-3115389bed13, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:24 localhost 
ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:24.543+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3e440449-fe29-4222-837b-3115389bed13' of type subvolume Oct 14 06:19:24 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3e440449-fe29-4222-837b-3115389bed13' of type subvolume Oct 14 06:19:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3e440449-fe29-4222-837b-3115389bed13", "force": true, "format": "json"}]: dispatch Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:24 localhost systemd[1]: tmp-crun.7SXEve.mount: Deactivated successfully. 
Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3e440449-fe29-4222-837b-3115389bed13'' moved to trashcan Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:19:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3e440449-fe29-4222-837b-3115389bed13, vol_name:cephfs) < "" Oct 14 06:19:24 localhost podman[342708]: 2025-10-14 10:19:24.578783887 +0000 UTC m=+0.106909905 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 14 06:19:24 localhost podman[342708]: 2025-10-14 10:19:24.61538284 +0000 UTC m=+0.143508848 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true) Oct 14 06:19:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:24.619 270389 INFO neutron.agent.linux.ip_lib [None req-db074715-c3ec-439c-ac45-65ee296d0f2b - - - - - -] Device tapa8c29a23-22 cannot be used as it has no MAC address#033[00m Oct 14 06:19:24 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:24 localhost podman[342709]: 2025-10-14 10:19:24.648130392 +0000 UTC m=+0.173193789 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, 
maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:19:24 localhost kernel: device tapa8c29a23-22 entered promiscuous mode Oct 14 06:19:24 localhost NetworkManager[5972]: [1760437164.6512] manager: (tapa8c29a23-22): new Generic device (/org/freedesktop/NetworkManager/Devices/71) Oct 14 06:19:24 localhost ovn_controller[156286]: 2025-10-14T10:19:24Z|00396|binding|INFO|Claiming lport a8c29a23-2274-4623-999e-8b6de62e0804 for this chassis. Oct 14 06:19:24 localhost ovn_controller[156286]: 2025-10-14T10:19:24Z|00397|binding|INFO|a8c29a23-2274-4623-999e-8b6de62e0804: Claiming unknown Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:24 localhost podman[342709]: 2025-10-14 10:19:24.656091154 +0000 UTC m=+0.181154561 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:19:24 localhost systemd-udevd[342756]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:19:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch Oct 14 06:19:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:24 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:19:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:24.671 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-e318568e-8943-40e6-8e4a-3245daf525e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e318568e-8943-40e6-8e4a-3245daf525e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cffabfb0ecf4b5d91a7a63dd17a370a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed0b9b03-01d1-4245-a6a4-bb0e38ed6319, chassis=[], tunnel_key=3, 
gateway_chassis=[], requested_chassis=[], logical_port=a8c29a23-2274-4623-999e-8b6de62e0804) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:19:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:24.674 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a8c29a23-2274-4623-999e-8b6de62e0804 in datapath e318568e-8943-40e6-8e4a-3245daf525e6 bound to our chassis#033[00m Oct 14 06:19:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:24.677 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 058d93d4-0072-4afa-8f00-712d50f03ab7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:19:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:24.678 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e318568e-8943-40e6-8e4a-3245daf525e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:19:24 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:24.678 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[381ce4c8-8283-49c6-b37b-1788212b84ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:19:24 localhost journal[236030]: ethtool ioctl error on tapa8c29a23-22: No such device Oct 14 06:19:24 localhost ovn_controller[156286]: 2025-10-14T10:19:24Z|00398|binding|INFO|Setting lport a8c29a23-2274-4623-999e-8b6de62e0804 ovn-installed in OVS Oct 14 06:19:24 localhost ovn_controller[156286]: 2025-10-14T10:19:24Z|00399|binding|INFO|Setting lport a8c29a23-2274-4623-999e-8b6de62e0804 up in Southbound Oct 14 06:19:24 localhost journal[236030]: ethtool ioctl error on tapa8c29a23-22: No such device Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.736 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:24 localhost journal[236030]: ethtool ioctl error on tapa8c29a23-22: No such device Oct 14 06:19:24 localhost journal[236030]: ethtool ioctl error on tapa8c29a23-22: No such device Oct 14 06:19:24 localhost journal[236030]: ethtool ioctl error on tapa8c29a23-22: No such device Oct 14 06:19:24 localhost journal[236030]: ethtool ioctl error on tapa8c29a23-22: No such device Oct 14 06:19:24 localhost journal[236030]: ethtool ioctl error on tapa8c29a23-22: No such device Oct 14 06:19:24 localhost journal[236030]: ethtool ioctl error on tapa8c29a23-22: No such device Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.930 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.931 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" 
acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.931 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.931 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:19:24 localhost nova_compute[295778]: 2025-10-14 10:19:24.932 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:19:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:19:25 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:25.062 2 INFO neutron.agent.securitygroups_rpc [None req-7bf340a9-d2d9-4152-b37b-1f681301b926 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 193 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 33 KiB/s wr, 111 
op/s Oct 14 06:19:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:19:25 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3031716215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:19:25 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:25.428 2 INFO neutron.agent.securitygroups_rpc [None req-f3aa15d7-c869-4941-8ccc-15166138b527 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:25 localhost nova_compute[295778]: 2025-10-14 10:19:25.430 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:19:25 localhost systemd[1]: tmp-crun.JjLA2G.mount: Deactivated successfully. Oct 14 06:19:25 localhost nova_compute[295778]: 2025-10-14 10:19:25.657 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:19:25 localhost nova_compute[295778]: 2025-10-14 10:19:25.660 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11378MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:19:25 localhost nova_compute[295778]: 2025-10-14 10:19:25.661 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:19:25 localhost nova_compute[295778]: 2025-10-14 10:19:25.661 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:19:25 localhost podman[342850]: Oct 14 06:19:25 localhost podman[342850]: 2025-10-14 10:19:25.703062067 +0000 UTC m=+0.081699774 container create 80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e318568e-8943-40e6-8e4a-3245daf525e6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 14 06:19:25 localhost systemd[1]: Started libpod-conmon-80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd.scope. 
Oct 14 06:19:25 localhost nova_compute[295778]: 2025-10-14 10:19:25.745 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:19:25 localhost nova_compute[295778]: 2025-10-14 10:19:25.745 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:19:25 localhost systemd[1]: Started libcrun container. Oct 14 06:19:25 localhost nova_compute[295778]: 2025-10-14 10:19:25.760 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:19:25 localhost podman[342850]: 2025-10-14 10:19:25.66482314 +0000 UTC m=+0.043460907 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:19:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bdad19c0719281175301ccacf3bbb2802cc34c555a130eb3b22d91867837269/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:19:25 localhost podman[342850]: 2025-10-14 10:19:25.775591497 +0000 UTC m=+0.154229204 container init 80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e318568e-8943-40e6-8e4a-3245daf525e6, org.label-schema.vendor=CentOS, tcib_managed=true, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009) Oct 14 06:19:25 localhost podman[342850]: 2025-10-14 10:19:25.785313355 +0000 UTC m=+0.163951072 container start 80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e318568e-8943-40e6-8e4a-3245daf525e6, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:19:25 localhost dnsmasq[342869]: started, version 2.85 cachesize 150 Oct 14 06:19:25 localhost dnsmasq[342869]: DNS service limited to local subnets Oct 14 06:19:25 localhost dnsmasq[342869]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:19:25 localhost dnsmasq[342869]: warning: no upstream servers configured Oct 14 06:19:25 localhost dnsmasq-dhcp[342869]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:19:25 localhost dnsmasq[342869]: read /var/lib/neutron/dhcp/e318568e-8943-40e6-8e4a-3245daf525e6/addn_hosts - 0 addresses Oct 14 06:19:25 localhost dnsmasq-dhcp[342869]: read /var/lib/neutron/dhcp/e318568e-8943-40e6-8e4a-3245daf525e6/host Oct 14 06:19:25 localhost dnsmasq-dhcp[342869]: read /var/lib/neutron/dhcp/e318568e-8943-40e6-8e4a-3245daf525e6/opts Oct 14 06:19:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:25.845 270389 INFO neutron.agent.dhcp.agent [None 
req-70f21e31-c04d-4925-9ece-3055f5cd19d0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:19:23Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0d55429a-7325-4095-a296-ddff99255a1d, ip_allocation=immediate, mac_address=fa:16:3e:c7:91:a4, name=tempest-PortsTestJSON-1148011176, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:19:22Z, description=, dns_domain=, id=e318568e-8943-40e6-8e4a-3245daf525e6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1313773792, port_security_enabled=True, project_id=2cffabfb0ecf4b5d91a7a63dd17a370a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48849, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2908, status=ACTIVE, subnets=['386a6932-c849-4759-b99f-ce27d6b0ba2e'], tags=[], tenant_id=2cffabfb0ecf4b5d91a7a63dd17a370a, updated_at=2025-10-14T10:19:23Z, vlan_transparent=None, network_id=e318568e-8943-40e6-8e4a-3245daf525e6, port_security_enabled=True, project_id=2cffabfb0ecf4b5d91a7a63dd17a370a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533'], standard_attr_id=2913, status=DOWN, tags=[], tenant_id=2cffabfb0ecf4b5d91a7a63dd17a370a, updated_at=2025-10-14T10:19:23Z on network e318568e-8943-40e6-8e4a-3245daf525e6#033[00m Oct 14 06:19:25 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:25.964 270389 INFO neutron.agent.dhcp.agent [None req-4dcdec8c-d4a9-4245-a398-2b82319b9a56 - - - - - -] DHCP configuration for ports {'593c70f8-03be-4ae9-a4ea-7d88d1ba7fcb'} is completed#033[00m Oct 14 
06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:26 localhost dnsmasq[342869]: read /var/lib/neutron/dhcp/e318568e-8943-40e6-8e4a-3245daf525e6/addn_hosts - 1 addresses Oct 14 06:19:26 localhost dnsmasq-dhcp[342869]: read /var/lib/neutron/dhcp/e318568e-8943-40e6-8e4a-3245daf525e6/host Oct 14 06:19:26 localhost podman[342904]: 2025-10-14 10:19:26.151535228 +0000 UTC m=+0.073842346 container kill 80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e318568e-8943-40e6-8e4a-3245daf525e6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:19:26 localhost dnsmasq-dhcp[342869]: read /var/lib/neutron/dhcp/e318568e-8943-40e6-8e4a-3245daf525e6/opts Oct 14 06:19:26 localhost ovn_controller[156286]: 2025-10-14T10:19:26Z|00400|binding|INFO|Removing iface tapa8c29a23-22 ovn-installed in OVS Oct 14 06:19:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:26.226 161932 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 058d93d4-0072-4afa-8f00-712d50f03ab7 with type ""#033[00m Oct 14 06:19:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:26.227 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 
'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-e318568e-8943-40e6-8e4a-3245daf525e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e318568e-8943-40e6-8e4a-3245daf525e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cffabfb0ecf4b5d91a7a63dd17a370a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed0b9b03-01d1-4245-a6a4-bb0e38ed6319, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=a8c29a23-2274-4623-999e-8b6de62e0804) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:19:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:26.229 161932 INFO neutron.agent.ovn.metadata.agent [-] Port a8c29a23-2274-4623-999e-8b6de62e0804 in datapath e318568e-8943-40e6-8e4a-3245daf525e6 unbound from our chassis#033[00m Oct 14 06:19:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:26.231 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e318568e-8943-40e6-8e4a-3245daf525e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:19:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:26.232 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[8cca53c5-0d60-4c3c-8a8e-66ca5db18840]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:19:26 localhost ovn_controller[156286]: 
2025-10-14T10:19:26Z|00401|binding|INFO|Removing lport a8c29a23-2274-4623-999e-8b6de62e0804 ovn-installed in OVS Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:19:26 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3040503089' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.265 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.271 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.286 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 
1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.289 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.289 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:19:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:26.388 270389 INFO neutron.agent.dhcp.agent [None req-f78d31a8-a0ad-4622-a2ad-edb663acd9cf - - - - - -] DHCP configuration for ports {'0d55429a-7325-4095-a296-ddff99255a1d'} is completed#033[00m Oct 14 06:19:26 localhost dnsmasq[342869]: exiting on receipt of SIGTERM Oct 14 06:19:26 localhost podman[342944]: 2025-10-14 10:19:26.59289886 +0000 UTC m=+0.065336209 container kill 80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e318568e-8943-40e6-8e4a-3245daf525e6, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0) Oct 14 06:19:26 localhost systemd[1]: libpod-80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd.scope: Deactivated successfully. Oct 14 06:19:26 localhost podman[342958]: 2025-10-14 10:19:26.665337717 +0000 UTC m=+0.059864623 container died 80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e318568e-8943-40e6-8e4a-3245daf525e6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:19:26 localhost podman[342958]: 2025-10-14 10:19:26.69668153 +0000 UTC m=+0.091208376 container cleanup 80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e318568e-8943-40e6-8e4a-3245daf525e6, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:19:26 localhost systemd[1]: libpod-conmon-80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd.scope: Deactivated successfully. 
Oct 14 06:19:26 localhost podman[342960]: 2025-10-14 10:19:26.740548928 +0000 UTC m=+0.125384637 container remove 80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e318568e-8943-40e6-8e4a-3245daf525e6, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:19:26 localhost kernel: device tapa8c29a23-22 left promiscuous mode Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:26.810 270389 INFO neutron.agent.dhcp.agent [None req-a028798f-fb73-440e-8e15-4597b545c7c8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:19:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:26.811 270389 INFO neutron.agent.dhcp.agent [None req-a028798f-fb73-440e-8e15-4597b545c7c8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:19:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:26.812 270389 INFO neutron.agent.dhcp.agent [None req-a028798f-fb73-440e-8e15-4597b545c7c8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:19:26 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:19:26.812 270389 INFO 
neutron.agent.dhcp.agent [None req-a028798f-fb73-440e-8e15-4597b545c7c8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:19:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e188 do_prune osdmap full prune enabled Oct 14 06:19:26 localhost nova_compute[295778]: 2025-10-14 10:19:26.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e189 e189: 6 total, 6 up, 6 in Oct 14 06:19:26 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e189: 6 total, 6 up, 6 in Oct 14 06:19:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "tenant_id": "4d12c8bb835544c791c95609a68ae6d3", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0) Oct 14 06:19:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:27 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID tempest-cephx-id-1069349725 
with tenant 4d12c8bb835544c791c95609a68ae6d3 Oct 14 06:19:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:19:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 193 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 96 KiB/s 
rd, 41 KiB/s wr, 139 op/s Oct 14 06:19:27 localhost systemd[1]: var-lib-containers-storage-overlay-6bdad19c0719281175301ccacf3bbb2802cc34c555a130eb3b22d91867837269-merged.mount: Deactivated successfully. Oct 14 06:19:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80a47f675d82e6b3f5cc35ef3c37b600dbe6f80a14aeb4379ca3ff83868267cd-userdata-shm.mount: Deactivated successfully. Oct 14 06:19:27 localhost systemd[1]: run-netns-qdhcp\x2de318568e\x2d8943\x2d40e6\x2d8e4a\x2d3245daf525e6.mount: Deactivated successfully. Oct 14 06:19:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "snap_name": "5387cb38-4c9f-4b49-bada-60ce42bd2fd5_8c2d193c-1b93-4049-a25c-46924b3bc731", "force": true, "format": "json"}]: dispatch Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5387cb38-4c9f-4b49-bada-60ce42bd2fd5_8c2d193c-1b93-4049-a25c-46924b3bc731, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp' Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp' to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta' Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5387cb38-4c9f-4b49-bada-60ce42bd2fd5_8c2d193c-1b93-4049-a25c-46924b3bc731, 
sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "snap_name": "5387cb38-4c9f-4b49-bada-60ce42bd2fd5", "force": true, "format": "json"}]: dispatch Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5387cb38-4c9f-4b49-bada-60ce42bd2fd5, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp' Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta.tmp' to config b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e/.meta' Oct 14 06:19:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5387cb38-4c9f-4b49-bada-60ce42bd2fd5, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < "" Oct 14 06:19:28 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:28 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", 
"osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:28 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:28 localhost nova_compute[295778]: 2025-10-14 10:19:28.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:28 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:28.952 2 INFO neutron.agent.securitygroups_rpc [None req-c2f9c371-9d2a-4b37-a033-6edbdd512e3d bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 193 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 32 KiB/s wr, 124 op/s Oct 14 06:19:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:19:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:19:29 localhost systemd[1]: tmp-crun.g5PVUD.mount: Deactivated successfully. 
Oct 14 06:19:29 localhost podman[342986]: 2025-10-14 10:19:29.604992863 +0000 UTC m=+0.141293251 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 14 06:19:29 localhost podman[342985]: 2025-10-14 10:19:29.561859365 +0000 UTC m=+0.100792533 container health_status 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009) Oct 14 06:19:29 localhost podman[342986]: 2025-10-14 10:19:29.616579021 +0000 UTC m=+0.152879449 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 14 06:19:29 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:19:29 localhost podman[342985]: 2025-10-14 10:19:29.641859534 +0000 UTC m=+0.180792692 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:19:29 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:19:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:19:30 localhost nova_compute[295778]: 2025-10-14 10:19:30.290 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:19:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp'
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta'
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "format": "json"}]: dispatch
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:19:30 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:19:30 localhost podman[246584]: time="2025-10-14T10:19:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 06:19:30 localhost podman[246584]: @ - - [14/Oct/2025:10:19:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1"
Oct 14 06:19:30 localhost podman[246584]: @ - - [14/Oct/2025:10:19:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18901 "" "Go-http-client/1.1"
Oct 14 06:19:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0)
Oct 14 06:19:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch
Oct 14 06:19:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} v 0)
Oct 14 06:19:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch
Oct 14 06:19:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1069349725, client_metadata.root=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:30.975 2 INFO neutron.agent.securitygroups_rpc [None req-e08f7c5d-e6e9-4662-9389-773b37e0faaa bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m
Oct 14 06:19:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "format": "json"}]: dispatch
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:19:30 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:30.987+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee7ca2ee-e381-4769-b9a6-4f389e42366e' of type subvolume
Oct 14 06:19:30 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee7ca2ee-e381-4769-b9a6-4f389e42366e' of type subvolume
Oct 14 06:19:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee7ca2ee-e381-4769-b9a6-4f389e42366e", "force": true, "format": "json"}]: dispatch
Oct 14 06:19:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < ""
Oct 14 06:19:31 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee7ca2ee-e381-4769-b9a6-4f389e42366e'' moved to trashcan
Oct 14 06:19:31 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:19:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee7ca2ee-e381-4769-b9a6-4f389e42366e, vol_name:cephfs) < ""
Oct 14 06:19:31 localhost nova_compute[295778]: 2025-10-14 10:19:31.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:19:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 193 MiB data, 946 MiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 41 KiB/s wr, 109 op/s
Oct 14 06:19:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch
Oct 14 06:19:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch
Oct 14 06:19:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished
Oct 14 06:19:31 localhost nova_compute[295778]: 2025-10-14 10:19:31.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:19:31 localhost nova_compute[295778]: 2025-10-14 10:19:31.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 06:19:31 localhost nova_compute[295778]: 2025-10-14 10:19:31.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 06:19:31 localhost nova_compute[295778]: 2025-10-14 10:19:31.917 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 06:19:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:19:32 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:19:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 14 06:19:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:19:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 14 06:19:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:19:32 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 85171d30-81bd-4c9e-8b61-0bb5a769dee1 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:19:32 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 85171d30-81bd-4c9e-8b61-0bb5a769dee1 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:19:32 localhost ceph-mgr[300442]: [progress INFO root] Completed event 85171d30-81bd-4c9e-8b61-0bb5a769dee1 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 14 06:19:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 14 06:19:32 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 14 06:19:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 14 06:19:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e190 e190: 6 total, 6 up, 6 in
Oct 14 06:19:32 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e190: 6 total, 6 up, 6 in
Oct 14 06:19:32 localhost nova_compute[295778]: 2025-10-14 10:19:32.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:19:33 localhost openstack_network_exporter[248748]: ERROR 10:19:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 06:19:33 localhost openstack_network_exporter[248748]: ERROR 10:19:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 06:19:33 localhost openstack_network_exporter[248748]:
Oct 14 06:19:33 localhost openstack_network_exporter[248748]: ERROR 10:19:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:19:33 localhost openstack_network_exporter[248748]: ERROR 10:19:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:19:33 localhost openstack_network_exporter[248748]: ERROR 10:19:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 06:19:33 localhost openstack_network_exporter[248748]:
Oct 14 06:19:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 193 MiB data, 946 MiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 17 KiB/s wr, 5 op/s
Oct 14 06:19:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:19:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:19:33 localhost nova_compute[295778]: 2025-10-14 10:19:33.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:19:33 localhost nova_compute[295778]: 2025-10-14 10:19:33.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:19:33 localhost nova_compute[295778]: 2025-10-14 10:19:33.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 06:19:33 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6c495245-4d84-4632-8a3d-99388b40ac14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:19:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c495245-4d84-4632-8a3d-99388b40ac14, vol_name:cephfs) < ""
Oct 14 06:19:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6c495245-4d84-4632-8a3d-99388b40ac14/.meta.tmp'
Oct 14 06:19:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6c495245-4d84-4632-8a3d-99388b40ac14/.meta.tmp' to config b'/volumes/_nogroup/6c495245-4d84-4632-8a3d-99388b40ac14/.meta'
Oct 14 06:19:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c495245-4d84-4632-8a3d-99388b40ac14, vol_name:cephfs) < ""
Oct 14 06:19:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6c495245-4d84-4632-8a3d-99388b40ac14", "format": "json"}]: dispatch
Oct 14 06:19:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c495245-4d84-4632-8a3d-99388b40ac14, vol_name:cephfs) < ""
Oct 14 06:19:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c495245-4d84-4632-8a3d-99388b40ac14, vol_name:cephfs) < ""
Oct 14 06:19:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:19:34 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:19:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "tenant_id": "4d12c8bb835544c791c95609a68ae6d3", "access_level": "rw", "format": "json"}]: dispatch
Oct 14 06:19:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < ""
Oct 14 06:19:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0)
Oct 14 06:19:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch
Oct 14 06:19:34 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID tempest-cephx-id-1069349725 with tenant 4d12c8bb835544c791c95609a68ae6d3
Oct 14 06:19:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:19:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:19:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:19:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < ""
Oct 14 06:19:34 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events
Oct 14 06:19:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 14 06:19:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:19:34 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch
Oct 14 06:19:34 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:19:34 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:19:34 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:19:34 localhost nova_compute[295778]: 2025-10-14 10:19:34.901 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:19:34 localhost nova_compute[295778]: 2025-10-14 10:19:34.902 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:19:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:19:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 14 06:19:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e191 e191: 6 total, 6 up, 6 in
Oct 14 06:19:34 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e191: 6 total, 6 up, 6 in
Oct 14 06:19:35 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:35.236 2 INFO neutron.agent.securitygroups_rpc [None req-2474946b-4b63-4e01-8a5d-90b033b7cb5e bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['aef71ad5-f79f-4ece-9506-eb534e5871f7']#033[00m
Oct 14 06:19:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 193 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 59 KiB/s wr, 35 op/s
Oct 14 06:19:36 localhost nova_compute[295778]: 2025-10-14 10:19:36.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:19:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:36.917 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:35:97 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ac6dfafe-fe19-467d-95fd-e237a973e4b9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac6dfafe-fe19-467d-95fd-e237a973e4b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cffabfb0ecf4b5d91a7a63dd17a370a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=857fac98-c81f-4d46-906d-419db1a00d28, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3c920156-fad7-49d0-b0fe-2d7f43047b0e) old=Port_Binding(mac=['fa:16:3e:f2:35:97 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ac6dfafe-fe19-467d-95fd-e237a973e4b9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac6dfafe-fe19-467d-95fd-e237a973e4b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cffabfb0ecf4b5d91a7a63dd17a370a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:19:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:36.919 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3c920156-fad7-49d0-b0fe-2d7f43047b0e in datapath ac6dfafe-fe19-467d-95fd-e237a973e4b9 updated#033[00m
Oct 14 06:19:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:36.921 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac6dfafe-fe19-467d-95fd-e237a973e4b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 06:19:36 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:36.922 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[b70d946d-996f-49c1-9269-1cb7b284dbe3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 06:19:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "6c495245-4d84-4632-8a3d-99388b40ac14", "new_size": 2147483648, "format": "json"}]: dispatch
Oct 14 06:19:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:6c495245-4d84-4632-8a3d-99388b40ac14, vol_name:cephfs) < ""
Oct 14 06:19:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:6c495245-4d84-4632-8a3d-99388b40ac14, vol_name:cephfs) < ""
Oct 14 06:19:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 193 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 59 KiB/s wr, 35 op/s
Oct 14 06:19:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch
Oct 14 06:19:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < ""
Oct 14 06:19:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:19:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:19:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:19:37 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:37.463 2 INFO neutron.agent.securitygroups_rpc [None req-64d9ba1f-f4f6-4f94-b760-17fb755a2c7e bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['aef71ad5-f79f-4ece-9506-eb534e5871f7', '4125c890-e798-4a40-8152-43496831500b']#033[00m
Oct 14 06:19:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0)
Oct 14 06:19:37 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch
Oct 14 06:19:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} v 0)
Oct 14 06:19:37 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch
Oct 14 06:19:37 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished
Oct 14 06:19:37 localhost podman[343108]: 2025-10-14 10:19:37.561412043 +0000 UTC m=+0.095175723 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public)
Oct 14 06:19:37 localhost podman[343108]: 2025-10-14 10:19:37.573840973 +0000 UTC m=+0.107604643 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 14 06:19:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:37 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:19:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1069349725, client_metadata.root=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07 Oct 14 06:19:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:19:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:37 localhost podman[343110]: 2025-10-14 10:19:37.622879508 +0000 UTC m=+0.146761306 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:19:37 localhost podman[343109]: 2025-10-14 10:19:37.681265972 +0000 UTC m=+0.208711454 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller) Oct 14 06:19:37 localhost podman[343110]: 2025-10-14 10:19:37.707428837 +0000 UTC m=+0.231310665 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:19:37 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:19:37 localhost podman[343109]: 2025-10-14 10:19:37.748271883 +0000 UTC m=+0.275717335 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:19:37 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:19:37 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:37.785 2 INFO neutron.agent.securitygroups_rpc [None req-5640d12e-d220-4be6-9c7b-bac66bab8d2c bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['4125c890-e798-4a40-8152-43496831500b']#033[00m Oct 14 06:19:37 localhost nova_compute[295778]: 2025-10-14 10:19:37.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:19:38 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:38 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch Oct 14 06:19:38 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:38 localhost nova_compute[295778]: 2025-10-14 10:19:38.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:19:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:19:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:19:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:19:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:19:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:19:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 193 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 42 KiB/s wr, 29 op/s Oct 14 06:19:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:19:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e191 do_prune osdmap full prune enabled Oct 14 06:19:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e192 e192: 6 total, 6 up, 6 in Oct 14 06:19:39 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e192: 6 total, 6 up, 6 in Oct 14 06:19:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6c495245-4d84-4632-8a3d-99388b40ac14", "format": "json"}]: dispatch Oct 14 06:19:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6c495245-4d84-4632-8a3d-99388b40ac14, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6c495245-4d84-4632-8a3d-99388b40ac14, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:40 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:40.673+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c495245-4d84-4632-8a3d-99388b40ac14' of type subvolume Oct 14 06:19:40 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c495245-4d84-4632-8a3d-99388b40ac14' 
of type subvolume Oct 14 06:19:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6c495245-4d84-4632-8a3d-99388b40ac14", "force": true, "format": "json"}]: dispatch Oct 14 06:19:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c495245-4d84-4632-8a3d-99388b40ac14, vol_name:cephfs) < "" Oct 14 06:19:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6c495245-4d84-4632-8a3d-99388b40ac14'' moved to trashcan Oct 14 06:19:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:19:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c495245-4d84-4632-8a3d-99388b40ac14, vol_name:cephfs) < "" Oct 14 06:19:40 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:40.842 2 INFO neutron.agent.securitygroups_rpc [None req-07371dd3-5e9f-44d8-b05e-23650af26a99 bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['20ea9413-8d37-4d17-a266-e02ebbcd4097']#033[00m Oct 14 06:19:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "tenant_id": "4d12c8bb835544c791c95609a68ae6d3", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:19:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, 
sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0) Oct 14 06:19:40 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:40 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID tempest-cephx-id-1069349725 with tenant 4d12c8bb835544c791c95609a68ae6d3 Oct 14 06:19:40 localhost nova_compute[295778]: 2025-10-14 10:19:40.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:19:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:19:40 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], 
"format": "json"} : dispatch Oct 14 06:19:40 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:41 localhost nova_compute[295778]: 2025-10-14 10:19:41.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 193 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 54 KiB/s wr, 32 op/s Oct 14 06:19:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:41 localhost 
ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:42.508 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:35:97 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-ac6dfafe-fe19-467d-95fd-e237a973e4b9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac6dfafe-fe19-467d-95fd-e237a973e4b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cffabfb0ecf4b5d91a7a63dd17a370a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=857fac98-c81f-4d46-906d-419db1a00d28, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3c920156-fad7-49d0-b0fe-2d7f43047b0e) old=Port_Binding(mac=['fa:16:3e:f2:35:97 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ac6dfafe-fe19-467d-95fd-e237a973e4b9', 
'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac6dfafe-fe19-467d-95fd-e237a973e4b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cffabfb0ecf4b5d91a7a63dd17a370a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:19:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:42.510 161932 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3c920156-fad7-49d0-b0fe-2d7f43047b0e in datapath ac6dfafe-fe19-467d-95fd-e237a973e4b9 updated#033[00m Oct 14 06:19:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:42.512 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac6dfafe-fe19-467d-95fd-e237a973e4b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:19:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:42.513 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[6314e9ab-d344-4c14-9583-73f69ce0dc7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:19:42 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:42.885 2 INFO neutron.agent.securitygroups_rpc [None req-55447faf-c0cc-43a9-8209-8c9d04c1af5b bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['20ea9413-8d37-4d17-a266-e02ebbcd4097', '78dc3557-4401-4e68-b797-8af5d01655e7', 'f475abc8-1ecb-46b5-aee7-314e00187e8e']#033[00m Oct 14 06:19:43 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:43.177 2 INFO neutron.agent.securitygroups_rpc [None req-43b9eac2-1b77-4acb-89ed-53198092cfa5 bbbcd088abe94518b01a8b1085998690 
2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['78dc3557-4401-4e68-b797-8af5d01655e7', 'f475abc8-1ecb-46b5-aee7-314e00187e8e']#033[00m Oct 14 06:19:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 193 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 51 KiB/s wr, 30 op/s Oct 14 06:19:43 localhost nova_compute[295778]: 2025-10-14 10:19:43.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa7d1748-1511-43e0-8f5a-f79700ee12aa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:19:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, vol_name:cephfs) < "" Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa7d1748-1511-43e0-8f5a-f79700ee12aa/.meta.tmp' Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa7d1748-1511-43e0-8f5a-f79700ee12aa/.meta.tmp' to config b'/volumes/_nogroup/aa7d1748-1511-43e0-8f5a-f79700ee12aa/.meta' Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, vol_name:cephfs) < "" Oct 14 06:19:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : 
from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa7d1748-1511-43e0-8f5a-f79700ee12aa", "format": "json"}]: dispatch Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, vol_name:cephfs) < "" Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, vol_name:cephfs) < "" Oct 14 06:19:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:19:44 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:19:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0) Oct 14 06:19:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} 
: dispatch Oct 14 06:19:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} v 0) Oct 14 06:19:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch Oct 14 06:19:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1069349725, client_metadata.root=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07 Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:19:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:44 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:44 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch Oct 14 06:19:44 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:19:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 37 KiB/s wr, 7 op/s Oct 14 06:19:46 localhost nova_compute[295778]: 2025-10-14 10:19:46.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:46 localhost neutron_sriov_agent[263389]: 2025-10-14 10:19:46.316 2 INFO neutron.agent.securitygroups_rpc [None req-33c7cc6c-c9a0-4ce3-9ef1-1e7086da595b bbbcd088abe94518b01a8b1085998690 2cffabfb0ecf4b5d91a7a63dd17a370a - - default default] Security group member updated ['29ffe3b6-a0bd-4faa-ab66-a0c74c1b5533']#033[00m Oct 14 06:19:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "aa7d1748-1511-43e0-8f5a-f79700ee12aa", "new_size": 2147483648, "format": 
"json"}]: dispatch Oct 14 06:19:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, vol_name:cephfs) < "" Oct 14 06:19:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, vol_name:cephfs) < "" Oct 14 06:19:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 37 KiB/s wr, 7 op/s Oct 14 06:19:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "tenant_id": "4d12c8bb835544c791c95609a68ae6d3", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:19:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0) Oct 14 06:19:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:47 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID 
tempest-cephx-id-1069349725 with tenant 4d12c8bb835544c791c95609a68ae6d3 Oct 14 06:19:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:19:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume authorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, tenant_id:4d12c8bb835544c791c95609a68ae6d3, vol_name:cephfs) < "" Oct 14 06:19:47 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", 
"entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:47 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:47 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1069349725", "caps": ["mds", "allow rw path=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07", "osd", "allow rw pool=manila_data namespace=fsvolumens_ce754065-58e4-49a9-a603-4440ef4b311b", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:48 localhost nova_compute[295778]: 2025-10-14 10:19:48.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 193 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 37 KiB/s wr, 7 op/s Oct 14 06:19:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:19:49 localhost podman[343182]: 2025-10-14 10:19:49.546389939 +0000 UTC m=+0.086426350 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 14 06:19:49 localhost podman[343182]: 2025-10-14 10:19:49.559695853 +0000 UTC m=+0.099732274 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:19:49 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:19:49 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4029ebee-9d6d-4f6d-a493-3c2747a51c66", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:19:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4029ebee-9d6d-4f6d-a493-3c2747a51c66, vol_name:cephfs) < "" Oct 14 06:19:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:19:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4029ebee-9d6d-4f6d-a493-3c2747a51c66/.meta.tmp' Oct 14 06:19:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4029ebee-9d6d-4f6d-a493-3c2747a51c66/.meta.tmp' to config b'/volumes/_nogroup/4029ebee-9d6d-4f6d-a493-3c2747a51c66/.meta' Oct 14 06:19:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4029ebee-9d6d-4f6d-a493-3c2747a51c66, vol_name:cephfs) < "" Oct 14 06:19:49 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4029ebee-9d6d-4f6d-a493-3c2747a51c66", "format": "json"}]: dispatch Oct 14 06:19:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4029ebee-9d6d-4f6d-a493-3c2747a51c66, vol_name:cephfs) < 
"" Oct 14 06:19:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4029ebee-9d6d-4f6d-a493-3c2747a51c66, vol_name:cephfs) < "" Oct 14 06:19:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:19:49 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:19:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa7d1748-1511-43e0-8f5a-f79700ee12aa", "format": "json"}]: dispatch Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:50 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:50.544+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa7d1748-1511-43e0-8f5a-f79700ee12aa' of type subvolume Oct 14 06:19:50 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa7d1748-1511-43e0-8f5a-f79700ee12aa' of type subvolume Oct 14 06:19:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"aa7d1748-1511-43e0-8f5a-f79700ee12aa", "force": true, "format": "json"}]: dispatch Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, vol_name:cephfs) < "" Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa7d1748-1511-43e0-8f5a-f79700ee12aa'' moved to trashcan Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa7d1748-1511-43e0-8f5a-f79700ee12aa, vol_name:cephfs) < "" Oct 14 06:19:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} v 0) Oct 14 06:19:50 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", 
"entity": "client.tempest-cephx-id-1069349725"} v 0) Oct 14 06:19:50 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch Oct 14 06:19:50 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume deauthorize, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "auth_id": "tempest-cephx-id-1069349725", "format": "json"}]: dispatch Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1069349725, client_metadata.root=/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b/064df9c5-e20e-4319-95d4-91217c164f07 Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:19:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1069349725, format:json, prefix:fs subvolume evict, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < 
"" Oct 14 06:19:51 localhost nova_compute[295778]: 2025-10-14 10:19:51.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 194 MiB data, 952 MiB used, 41 GiB / 42 GiB avail; 269 B/s rd, 47 KiB/s wr, 10 op/s Oct 14 06:19:51 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1069349725", "format": "json"} : dispatch Oct 14 06:19:51 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"} : dispatch Oct 14 06:19:51 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1069349725"}]': finished Oct 14 06:19:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7464b515-4b76-441d-b135-e760267664f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:19:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:52 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0/.meta.tmp' Oct 14 06:19:52 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0/.meta.tmp' to config b'/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0/.meta' Oct 14 06:19:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7464b515-4b76-441d-b135-e760267664f0", "format": "json"}]: dispatch Oct 14 06:19:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:52 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:19:52 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:19:53 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9f9fec4a-d1ca-4cbd-920f-f64bec7d897a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, 
size:1073741824, sub_name:9f9fec4a-d1ca-4cbd-920f-f64bec7d897a, vol_name:cephfs) < "" Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9f9fec4a-d1ca-4cbd-920f-f64bec7d897a/.meta.tmp' Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9f9fec4a-d1ca-4cbd-920f-f64bec7d897a/.meta.tmp' to config b'/volumes/_nogroup/9f9fec4a-d1ca-4cbd-920f-f64bec7d897a/.meta' Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f9fec4a-d1ca-4cbd-920f-f64bec7d897a, vol_name:cephfs) < "" Oct 14 06:19:53 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9f9fec4a-d1ca-4cbd-920f-f64bec7d897a", "format": "json"}]: dispatch Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f9fec4a-d1ca-4cbd-920f-f64bec7d897a, vol_name:cephfs) < "" Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f9fec4a-d1ca-4cbd-920f-f64bec7d897a, vol_name:cephfs) < "" Oct 14 06:19:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:19:53 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:19:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 194 MiB 
data, 952 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 37 KiB/s wr, 7 op/s Oct 14 06:19:53 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "da130deb-deba-4ca1-a774-8522c10404b8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:da130deb-deba-4ca1-a774-8522c10404b8, vol_name:cephfs) < "" Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/da130deb-deba-4ca1-a774-8522c10404b8/.meta.tmp' Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/da130deb-deba-4ca1-a774-8522c10404b8/.meta.tmp' to config b'/volumes/_nogroup/da130deb-deba-4ca1-a774-8522c10404b8/.meta' Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:da130deb-deba-4ca1-a774-8522c10404b8, vol_name:cephfs) < "" Oct 14 06:19:53 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "da130deb-deba-4ca1-a774-8522c10404b8", "format": "json"}]: dispatch Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:da130deb-deba-4ca1-a774-8522c10404b8, vol_name:cephfs) < "" Oct 14 06:19:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:da130deb-deba-4ca1-a774-8522c10404b8, vol_name:cephfs) < "" Oct 14 06:19:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:19:53 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:19:53 localhost nova_compute[295778]: 2025-10-14 10:19:53.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "format": "json"}]: dispatch Oct 14 06:19:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ce754065-58e4-49a9-a603-4440ef4b311b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ce754065-58e4-49a9-a603-4440ef4b311b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:54 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:54.656+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce754065-58e4-49a9-a603-4440ef4b311b' of type subvolume Oct 14 06:19:54 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce754065-58e4-49a9-a603-4440ef4b311b' of type subvolume Oct 14 06:19:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ce754065-58e4-49a9-a603-4440ef4b311b", "force": true, "format": "json"}]: dispatch Oct 14 06:19:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ce754065-58e4-49a9-a603-4440ef4b311b'' moved to trashcan Oct 14 06:19:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:19:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce754065-58e4-49a9-a603-4440ef4b311b, vol_name:cephfs) < "" Oct 14 06:19:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:19:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 194 MiB data, 971 MiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s rd, 64 KiB/s wr, 15 op/s Oct 14 06:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:19:55 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7464b515-4b76-441d-b135-e760267664f0", "auth_id": "tempest-cephx-id-134128418", "tenant_id": "c4c3ea536a214a16b479c6c6f4d33dc3", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:19:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-134128418, format:json, prefix:fs subvolume authorize, sub_name:7464b515-4b76-441d-b135-e760267664f0, tenant_id:c4c3ea536a214a16b479c6c6f4d33dc3, vol_name:cephfs) < "" Oct 14 06:19:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-134128418", "format": "json"} v 0) Oct 14 06:19:55 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-134128418", "format": "json"} : dispatch Oct 14 06:19:55 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID tempest-cephx-id-134128418 with tenant c4c3ea536a214a16b479c6c6f4d33dc3 Oct 14 06:19:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-134128418", "caps": ["mds", "allow rw path=/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0/d006bc5a-0512-431c-9db5-8c6c5ce619a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_7464b515-4b76-441d-b135-e760267664f0", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:19:55 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": 
"client.tempest-cephx-id-134128418", "caps": ["mds", "allow rw path=/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0/d006bc5a-0512-431c-9db5-8c6c5ce619a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_7464b515-4b76-441d-b135-e760267664f0", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:55 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-134128418", "caps": ["mds", "allow rw path=/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0/d006bc5a-0512-431c-9db5-8c6c5ce619a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_7464b515-4b76-441d-b135-e760267664f0", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:55 localhost podman[343203]: 2025-10-14 10:19:55.549270388 +0000 UTC m=+0.088872715 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 06:19:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-134128418, format:json, prefix:fs subvolume authorize, sub_name:7464b515-4b76-441d-b135-e760267664f0, tenant_id:c4c3ea536a214a16b479c6c6f4d33dc3, vol_name:cephfs) < "" Oct 14 06:19:55 localhost podman[343204]: 2025-10-14 10:19:55.606000847 +0000 UTC m=+0.142152133 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 
'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:19:55 localhost podman[343204]: 2025-10-14 10:19:55.618406427 +0000 UTC m=+0.154557763 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:19:55 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:19:55 localhost podman[343203]: 2025-10-14 10:19:55.63314992 +0000 UTC m=+0.172752247 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Oct 14 06:19:55 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: 
Deactivated successfully. Oct 14 06:19:56 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-134128418", "format": "json"} : dispatch Oct 14 06:19:56 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-134128418", "caps": ["mds", "allow rw path=/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0/d006bc5a-0512-431c-9db5-8c6c5ce619a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_7464b515-4b76-441d-b135-e760267664f0", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:19:56 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-134128418", "caps": ["mds", "allow rw path=/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0/d006bc5a-0512-431c-9db5-8c6c5ce619a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_7464b515-4b76-441d-b135-e760267664f0", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:19:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7464b515-4b76-441d-b135-e760267664f0", "auth_id": "tempest-cephx-id-134128418", "format": "json"}]: dispatch Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-134128418, format:json, prefix:fs subvolume deauthorize, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:56 localhost nova_compute[295778]: 2025-10-14 10:19:56.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:56 
localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-134128418", "format": "json"} v 0) Oct 14 06:19:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-134128418", "format": "json"} : dispatch Oct 14 06:19:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-134128418"} v 0) Oct 14 06:19:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-134128418"} : dispatch Oct 14 06:19:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-134128418"}]': finished Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-134128418, format:json, prefix:fs subvolume deauthorize, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7464b515-4b76-441d-b135-e760267664f0", "auth_id": "tempest-cephx-id-134128418", "format": "json"}]: dispatch Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-134128418, format:json, prefix:fs subvolume evict, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-134128418, client_metadata.root=/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0/d006bc5a-0512-431c-9db5-8c6c5ce619a9 Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-134128418, format:json, prefix:fs subvolume evict, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7464b515-4b76-441d-b135-e760267664f0", "format": "json"}]: dispatch Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7464b515-4b76-441d-b135-e760267664f0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7464b515-4b76-441d-b135-e760267664f0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:56.360+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7464b515-4b76-441d-b135-e760267664f0' of type subvolume Oct 14 06:19:56 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7464b515-4b76-441d-b135-e760267664f0' of type subvolume Oct 14 06:19:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"7464b515-4b76-441d-b135-e760267664f0", "force": true, "format": "json"}]: dispatch Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7464b515-4b76-441d-b135-e760267664f0'' moved to trashcan Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7464b515-4b76-441d-b135-e760267664f0, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9f9fec4a-d1ca-4cbd-920f-f64bec7d897a", "format": "json"}]: dispatch Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9f9fec4a-d1ca-4cbd-920f-f64bec7d897a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9f9fec4a-d1ca-4cbd-920f-f64bec7d897a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:56.855+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f9fec4a-d1ca-4cbd-920f-f64bec7d897a' of type subvolume Oct 14 06:19:56 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'9f9fec4a-d1ca-4cbd-920f-f64bec7d897a' of type subvolume Oct 14 06:19:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9f9fec4a-d1ca-4cbd-920f-f64bec7d897a", "force": true, "format": "json"}]: dispatch Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f9fec4a-d1ca-4cbd-920f-f64bec7d897a, vol_name:cephfs) < "" Oct 14 06:19:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:19:56 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/312361658' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:19:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:19:56 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/312361658' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9f9fec4a-d1ca-4cbd-920f-f64bec7d897a'' moved to trashcan Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:19:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f9fec4a-d1ca-4cbd-920f-f64bec7d897a, vol_name:cephfs) < "" Oct 14 06:19:57 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-134128418", "format": "json"} : dispatch Oct 14 06:19:57 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-134128418"} : dispatch Oct 14 06:19:57 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-134128418"}]': finished Oct 14 06:19:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:19:57 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/466625053' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:19:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:19:57 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/466625053' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:19:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "da130deb-deba-4ca1-a774-8522c10404b8", "format": "json"}]: dispatch Oct 14 06:19:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:da130deb-deba-4ca1-a774-8522c10404b8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:da130deb-deba-4ca1-a774-8522c10404b8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:19:57 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:19:57.097+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'da130deb-deba-4ca1-a774-8522c10404b8' of type subvolume Oct 14 06:19:57 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'da130deb-deba-4ca1-a774-8522c10404b8' of type subvolume Oct 14 06:19:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "da130deb-deba-4ca1-a774-8522c10404b8", "force": true, "format": "json"}]: dispatch Oct 14 06:19:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:da130deb-deba-4ca1-a774-8522c10404b8, vol_name:cephfs) < "" Oct 14 06:19:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/da130deb-deba-4ca1-a774-8522c10404b8'' moved to trashcan Oct 14 06:19:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:19:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:da130deb-deba-4ca1-a774-8522c10404b8, vol_name:cephfs) < "" Oct 14 06:19:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 194 MiB data, 971 MiB used, 41 GiB / 42 GiB avail; 1.6 KiB/s rd, 41 KiB/s wr, 10 op/s Oct 14 06:19:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:57.643 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:19:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:57.643 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:19:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:19:57.644 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:19:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:19:58 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1918635980' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:19:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:19:58 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1918635980' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:19:58 localhost nova_compute[295778]: 2025-10-14 10:19:58.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:19:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 194 MiB data, 971 MiB used, 41 GiB / 42 GiB avail; 1.6 KiB/s rd, 41 KiB/s wr, 10 op/s Oct 14 06:19:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:00 localhost ceph-mon[307093]: log_channel(cluster) log [INF] : overall HEALTH_OK Oct 14 06:20:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "80237de4-4339-48a8-80d2-a475524b4bbc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:80237de4-4339-48a8-80d2-a475524b4bbc, vol_name:cephfs) < "" Oct 14 06:20:00 localhost ceph-mon[307093]: overall HEALTH_OK Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/80237de4-4339-48a8-80d2-a475524b4bbc/.meta.tmp' Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/80237de4-4339-48a8-80d2-a475524b4bbc/.meta.tmp' to config b'/volumes/_nogroup/80237de4-4339-48a8-80d2-a475524b4bbc/.meta' Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:80237de4-4339-48a8-80d2-a475524b4bbc, vol_name:cephfs) < "" Oct 14 06:20:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "80237de4-4339-48a8-80d2-a475524b4bbc", "format": "json"}]: dispatch Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:80237de4-4339-48a8-80d2-a475524b4bbc, vol_name:cephfs) < "" Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:80237de4-4339-48a8-80d2-a475524b4bbc, vol_name:cephfs) < "" Oct 14 06:20:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:00 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b52f3c5b-f5a0-41ca-a712-1f30706c1bea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:00 
localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b52f3c5b-f5a0-41ca-a712-1f30706c1bea, vol_name:cephfs) < "" Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b52f3c5b-f5a0-41ca-a712-1f30706c1bea/.meta.tmp' Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b52f3c5b-f5a0-41ca-a712-1f30706c1bea/.meta.tmp' to config b'/volumes/_nogroup/b52f3c5b-f5a0-41ca-a712-1f30706c1bea/.meta' Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b52f3c5b-f5a0-41ca-a712-1f30706c1bea, vol_name:cephfs) < "" Oct 14 06:20:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b52f3c5b-f5a0-41ca-a712-1f30706c1bea", "format": "json"}]: dispatch Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b52f3c5b-f5a0-41ca-a712-1f30706c1bea, vol_name:cephfs) < "" Oct 14 06:20:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b52f3c5b-f5a0-41ca-a712-1f30706c1bea, vol_name:cephfs) < "" Oct 14 06:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:20:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:00 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:00 localhost podman[343245]: 2025-10-14 10:20:00.553089079 +0000 UTC m=+0.091403693 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 14 06:20:00 localhost podman[343245]: 2025-10-14 10:20:00.567105642 +0000 UTC m=+0.105420296 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Oct 14 06:20:00 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:20:00 localhost podman[246584]: time="2025-10-14T10:20:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:20:00 localhost podman[246584]: @ - - [14/Oct/2025:10:20:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:20:00 localhost podman[246584]: @ - - [14/Oct/2025:10:20:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18907 "" "Go-http-client/1.1" Oct 14 06:20:00 localhost podman[343246]: 2025-10-14 10:20:00.70764419 +0000 UTC m=+0.242741889 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:20:00 localhost podman[343246]: 2025-10-14 10:20:00.723176774 +0000 UTC m=+0.258274513 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS) Oct 14 06:20:00 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:20:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e192 do_prune osdmap full prune enabled Oct 14 06:20:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e193 e193: 6 total, 6 up, 6 in Oct 14 06:20:01 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e193: 6 total, 6 up, 6 in Oct 14 06:20:01 localhost nova_compute[295778]: 2025-10-14 10:20:01.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 194 MiB data, 989 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 70 KiB/s wr, 82 op/s Oct 14 06:20:03 localhost openstack_network_exporter[248748]: ERROR 10:20:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:20:03 localhost openstack_network_exporter[248748]: ERROR 10:20:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:20:03 localhost openstack_network_exporter[248748]: ERROR 10:20:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:20:03 localhost openstack_network_exporter[248748]: ERROR 10:20:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:20:03 localhost openstack_network_exporter[248748]: Oct 14 06:20:03 localhost openstack_network_exporter[248748]: ERROR 10:20:03 appctl.go:174: 
call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:20:03 localhost openstack_network_exporter[248748]: Oct 14 06:20:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 194 MiB data, 989 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 70 KiB/s wr, 82 op/s Oct 14 06:20:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b52f3c5b-f5a0-41ca-a712-1f30706c1bea", "format": "json"}]: dispatch Oct 14 06:20:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b52f3c5b-f5a0-41ca-a712-1f30706c1bea, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b52f3c5b-f5a0-41ca-a712-1f30706c1bea, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:03.609+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b52f3c5b-f5a0-41ca-a712-1f30706c1bea' of type subvolume Oct 14 06:20:03 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b52f3c5b-f5a0-41ca-a712-1f30706c1bea' of type subvolume Oct 14 06:20:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b52f3c5b-f5a0-41ca-a712-1f30706c1bea", "force": true, "format": "json"}]: dispatch Oct 14 06:20:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:b52f3c5b-f5a0-41ca-a712-1f30706c1bea, vol_name:cephfs) < "" Oct 14 06:20:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b52f3c5b-f5a0-41ca-a712-1f30706c1bea'' moved to trashcan Oct 14 06:20:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:20:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b52f3c5b-f5a0-41ca-a712-1f30706c1bea, vol_name:cephfs) < "" Oct 14 06:20:03 localhost nova_compute[295778]: 2025-10-14 10:20:03.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "80237de4-4339-48a8-80d2-a475524b4bbc", "format": "json"}]: dispatch Oct 14 06:20:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:80237de4-4339-48a8-80d2-a475524b4bbc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:80237de4-4339-48a8-80d2-a475524b4bbc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:04.181+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '80237de4-4339-48a8-80d2-a475524b4bbc' of type subvolume Oct 14 06:20:04 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '80237de4-4339-48a8-80d2-a475524b4bbc' of type 
subvolume Oct 14 06:20:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "80237de4-4339-48a8-80d2-a475524b4bbc", "force": true, "format": "json"}]: dispatch Oct 14 06:20:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:80237de4-4339-48a8-80d2-a475524b4bbc, vol_name:cephfs) < "" Oct 14 06:20:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/80237de4-4339-48a8-80d2-a475524b4bbc'' moved to trashcan Oct 14 06:20:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:20:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:80237de4-4339-48a8-80d2-a475524b4bbc, vol_name:cephfs) < "" Oct 14 06:20:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 194 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 56 KiB/s wr, 121 op/s Oct 14 06:20:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e193 do_prune osdmap full prune enabled Oct 14 06:20:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e194 e194: 6 total, 6 up, 6 in Oct 14 06:20:06 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e194: 6 total, 6 up, 6 in Oct 14 06:20:06 localhost nova_compute[295778]: 2025-10-14 10:20:06.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:06 localhost 
ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "06af7c87-8644-4fe7-94f1-8500932bfb03", "format": "json"}]: dispatch Oct 14 06:20:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:06af7c87-8644-4fe7-94f1-8500932bfb03, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:06af7c87-8644-4fe7-94f1-8500932bfb03, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 194 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 105 KiB/s rd, 71 KiB/s wr, 152 op/s Oct 14 06:20:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9fd47278-0915-4a63-a058-a734b612fd46", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9fd47278-0915-4a63-a058-a734b612fd46, vol_name:cephfs) < "" Oct 14 06:20:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9fd47278-0915-4a63-a058-a734b612fd46/.meta.tmp' Oct 14 06:20:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/9fd47278-0915-4a63-a058-a734b612fd46/.meta.tmp' to config b'/volumes/_nogroup/9fd47278-0915-4a63-a058-a734b612fd46/.meta' Oct 14 06:20:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9fd47278-0915-4a63-a058-a734b612fd46, vol_name:cephfs) < "" Oct 14 06:20:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9fd47278-0915-4a63-a058-a734b612fd46", "format": "json"}]: dispatch Oct 14 06:20:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9fd47278-0915-4a63-a058-a734b612fd46, vol_name:cephfs) < "" Oct 14 06:20:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9fd47278-0915-4a63-a058-a734b612fd46, vol_name:cephfs) < "" Oct 14 06:20:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:07 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:20:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:20:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:20:08 localhost podman[343284]: 2025-10-14 10:20:08.549834533 +0000 UTC m=+0.089433181 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, version=9.6, release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal) Oct 14 06:20:08 localhost podman[343284]: 2025-10-14 10:20:08.593110484 +0000 UTC m=+0.132709112 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9-minimal) Oct 14 06:20:08 localhost podman[343285]: 2025-10-14 10:20:08.606214692 +0000 UTC m=+0.140026006 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 14 06:20:08 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:20:08 localhost podman[343286]: 2025-10-14 10:20:08.675623638 +0000 UTC m=+0.205798545 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:20:08 localhost podman[343286]: 2025-10-14 10:20:08.687102994 +0000 UTC m=+0.217277921 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:20:08 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:20:08 localhost podman[343285]: 2025-10-14 10:20:08.709177911 +0000 UTC m=+0.242989225 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:20:08 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:20:08 localhost nova_compute[295778]: 2025-10-14 10:20:08.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:20:09 Oct 14 06:20:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:20:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:20:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['images', 'vms', 'backups', 'manila_data', '.mgr', 'volumes', 'manila_metadata'] Oct 14 06:20:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:20:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "343c69e2-b2b0-4638-b91f-68171be807ee", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < "" Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee/.meta.tmp' Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee/.meta.tmp' to config b'/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee/.meta' Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < "" Oct 14 06:20:09 localhost ceph-mgr[300442]: 
log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "343c69e2-b2b0-4638-b91f-68171be807ee", "format": "json"}]: dispatch
Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < ""
Oct 14 06:20:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < ""
Oct 14 06:20:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:20:09 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:20:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 194 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 23 KiB/s wr, 57 op/s
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32)
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32)
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32)
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:20:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00018838768495254047 of space, bias 4.0, pg target 0.1499565972222222 quantized to 16 (current 16)
Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:20:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:20:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:20:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "06af7c87-8644-4fe7-94f1-8500932bfb03_b6c281f8-5f42-4356-abd8-66a156dc0317", "force": true, "format": "json"}]: dispatch
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:06af7c87-8644-4fe7-94f1-8500932bfb03_b6c281f8-5f42-4356-abd8-66a156dc0317, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp'
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta'
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:06af7c87-8644-4fe7-94f1-8500932bfb03_b6c281f8-5f42-4356-abd8-66a156dc0317, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:20:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "06af7c87-8644-4fe7-94f1-8500932bfb03", "force": true, "format": "json"}]: dispatch
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:06af7c87-8644-4fe7-94f1-8500932bfb03, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp'
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta'
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:06af7c87-8644-4fe7-94f1-8500932bfb03, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:20:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9fd47278-0915-4a63-a058-a734b612fd46", "format": "json"}]: dispatch
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9fd47278-0915-4a63-a058-a734b612fd46, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9fd47278-0915-4a63-a058-a734b612fd46, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:20:10 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:10.872+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9fd47278-0915-4a63-a058-a734b612fd46' of type subvolume
Oct 14 06:20:10 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9fd47278-0915-4a63-a058-a734b612fd46' of type subvolume
Oct 14 06:20:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9fd47278-0915-4a63-a058-a734b612fd46", "force": true, "format": "json"}]: dispatch
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9fd47278-0915-4a63-a058-a734b612fd46, vol_name:cephfs) < ""
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9fd47278-0915-4a63-a058-a734b612fd46'' moved to trashcan
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:20:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9fd47278-0915-4a63-a058-a734b612fd46, vol_name:cephfs) < ""
Oct 14 06:20:11 localhost nova_compute[295778]: 2025-10-14 10:20:11.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:20:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 194 MiB data, 994 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 31 KiB/s wr, 74 op/s
Oct 14 06:20:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:20:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < ""
Oct 14 06:20:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629/.meta.tmp'
Oct 14 06:20:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629/.meta.tmp' to config b'/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629/.meta'
Oct 14 06:20:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < ""
Oct 14 06:20:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "format": "json"}]: dispatch
Oct 14 06:20:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < ""
Oct 14 06:20:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < ""
Oct 14 06:20:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:20:12 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:20:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 14 06:20:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e195 e195: 6 total, 6 up, 6 in
Oct 14 06:20:12 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e195: 6 total, 6 up, 6 in
Oct 14 06:20:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 194 MiB data, 994 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 15 KiB/s wr, 33 op/s
Oct 14 06:20:13 localhost nova_compute[295778]: 2025-10-14 10:20:13.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:20:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1a732a7d-f93f-4405-95cc-73dcb177cd2f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a732a7d-f93f-4405-95cc-73dcb177cd2f, vol_name:cephfs) < ""
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1a732a7d-f93f-4405-95cc-73dcb177cd2f/.meta.tmp'
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1a732a7d-f93f-4405-95cc-73dcb177cd2f/.meta.tmp' to config b'/volumes/_nogroup/1a732a7d-f93f-4405-95cc-73dcb177cd2f/.meta'
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a732a7d-f93f-4405-95cc-73dcb177cd2f, vol_name:cephfs) < ""
Oct 14 06:20:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1a732a7d-f93f-4405-95cc-73dcb177cd2f", "format": "json"}]: dispatch
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a732a7d-f93f-4405-95cc-73dcb177cd2f, vol_name:cephfs) < ""
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a732a7d-f93f-4405-95cc-73dcb177cd2f, vol_name:cephfs) < ""
Oct 14 06:20:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:20:14 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:20:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "793d0dc0-9028-4689-9a1f-ad45f585fbfb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < ""
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb/.meta.tmp'
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb/.meta.tmp' to config b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb/.meta'
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < ""
Oct 14 06:20:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "793d0dc0-9028-4689-9a1f-ad45f585fbfb", "format": "json"}]: dispatch
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < ""
Oct 14 06:20:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < ""
Oct 14 06:20:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:20:14 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:20:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:20:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 14 06:20:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e196 e196: 6 total, 6 up, 6 in
Oct 14 06:20:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e196: 6 total, 6 up, 6 in
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.028440) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437215028494, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2166, "num_deletes": 261, "total_data_size": 1978857, "memory_usage": 2022960, "flush_reason": "Manual Compaction"}
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437215041891, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 1926252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32085, "largest_seqno": 34250, "table_properties": {"data_size": 1916576, "index_size": 5929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 24152, "raw_average_key_size": 22, "raw_value_size": 1895830, "raw_average_value_size": 1768, "num_data_blocks": 252, "num_entries": 1072, "num_filter_entries": 1072, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760437109, "oldest_key_time": 1760437109, "file_creation_time": 1760437215, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 13498 microseconds, and 5611 cpu microseconds.
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.041936) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 1926252 bytes OK
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.041959) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.043583) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.043601) EVENT_LOG_v1 {"time_micros": 1760437215043595, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.043622) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1968990, prev total WAL file size 1968990, number of live WAL files 2.
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.044560) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end)
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(1881KB)], [60(16MB)]
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437215044646, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 18795325, "oldest_snapshot_seqno": -1}
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 13422 keys, 17618357 bytes, temperature: kUnknown
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437215150493, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 17618357, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17541659, "index_size": 42037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33605, "raw_key_size": 362125, "raw_average_key_size": 26, "raw_value_size": 17313046, "raw_average_value_size": 1289, "num_data_blocks": 1562, "num_entries": 13422, "num_filter_entries": 13422, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760437215, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.150820) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 17618357 bytes
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.152650) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.4 rd, 166.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 16.1 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(18.9) write-amplify(9.1) OK, records in: 13970, records dropped: 548 output_compression: NoCompression
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.152677) EVENT_LOG_v1 {"time_micros": 1760437215152665, "job": 36, "event": "compaction_finished", "compaction_time_micros": 105952, "compaction_time_cpu_micros": 53444, "output_level": 6, "num_output_files": 1, "total_output_size": 17618357, "num_input_records": 13970, "num_output_records": 13422, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437215153217, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437215155645, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.044413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.155786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.155791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.155794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.155797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:20:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:20:15.155800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:20:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 50 KiB/s wr, 62 op/s
Oct 14 06:20:15 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "debb450a-d993-4694-aef4-978f83e0e2e9", "format": "json"}]: dispatch
Oct 14 06:20:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:debb450a-d993-4694-aef4-978f83e0e2e9, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:20:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:debb450a-d993-4694-aef4-978f83e0e2e9, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < ""
Oct 14 06:20:15 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "auth_id": "Joe", "tenant_id": "c0a291c12d684e0180079f4c9858e70d", "access_level": "rw", "format": "json"}]: dispatch
Oct 14 06:20:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, tenant_id:c0a291c12d684e0180079f4c9858e70d, vol_name:cephfs) < ""
Oct 14 06:20:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Oct 14 06:20:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Oct 14 06:20:15 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID Joe with tenant c0a291c12d684e0180079f4c9858e70d
Oct 14 06:20:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629/b0a90cce-d7c2-42b6-a80f-3807b51966b8", "osd", "allow rw pool=manila_data namespace=fsvolumens_c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:20:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629/b0a90cce-d7c2-42b6-a80f-3807b51966b8", "osd", "allow rw pool=manila_data namespace=fsvolumens_c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:20:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629/b0a90cce-d7c2-42b6-a80f-3807b51966b8", "osd", "allow rw pool=manila_data namespace=fsvolumens_c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:20:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, tenant_id:c0a291c12d684e0180079f4c9858e70d, vol_name:cephfs) < ""
Oct 14 06:20:16 localhost nova_compute[295778]: 2025-10-14 10:20:16.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:20:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Oct 14 06:20:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629/b0a90cce-d7c2-42b6-a80f-3807b51966b8", "osd", "allow rw pool=manila_data namespace=fsvolumens_c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:20:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629/b0a90cce-d7c2-42b6-a80f-3807b51966b8", "osd", "allow rw pool=manila_data namespace=fsvolumens_c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:20:17 localhost ovn_controller[156286]: 2025-10-14T10:20:17Z|00402|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 14 06:20:17 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:20:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < ""
Oct 14 06:20:17 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/.meta.tmp'
Oct 14 06:20:17 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/.meta.tmp' to config b'/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/.meta'
Oct 14 06:20:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < ""
Oct 14 06:20:17 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "format": "json"}]: dispatch
Oct 14 06:20:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < ""
Oct 14 06:20:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < ""
Oct 14 06:20:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:20:17 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:20:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 50 KiB/s wr, 62 op/s
Oct 14 06:20:17 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "793d0dc0-9028-4689-9a1f-ad45f585fbfb", "snap_name": "ee86a50d-605c-4131-941e-79a81fefe4fc", "format": "json"}]: dispatch
Oct 14 06:20:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ee86a50d-605c-4131-941e-79a81fefe4fc, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < ""
Oct 14 06:20:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ee86a50d-605c-4131-941e-79a81fefe4fc, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < ""
Oct 14 06:20:18 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1a732a7d-f93f-4405-95cc-73dcb177cd2f", "format": "json"}]: dispatch
Oct 14 06:20:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1a732a7d-f93f-4405-95cc-73dcb177cd2f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:20:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1a732a7d-f93f-4405-95cc-73dcb177cd2f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:20:18 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:18.627+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a732a7d-f93f-4405-95cc-73dcb177cd2f' of type subvolume
Oct 14 06:20:18 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a732a7d-f93f-4405-95cc-73dcb177cd2f' of type subvolume
Oct 14 06:20:18 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1a732a7d-f93f-4405-95cc-73dcb177cd2f", "force": true, "format": "json"}]: dispatch
Oct 14 06:20:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a732a7d-f93f-4405-95cc-73dcb177cd2f, vol_name:cephfs) < ""
Oct 14 06:20:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1a732a7d-f93f-4405-95cc-73dcb177cd2f'' moved to trashcan
Oct 14 06:20:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:20:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a732a7d-f93f-4405-95cc-73dcb177cd2f, vol_name:cephfs) < ""
Oct 14 06:20:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 14 06:20:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e197 e197: 6 total, 6 up, 6 in
Oct 14 06:20:18 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e197: 6 total, 6 up, 6 in
Oct 14 06:20:18 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "69b55bb8-8324-4c30-842e-f30a5722e00f", "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:20:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 14 06:20:18 localhost ceph-mgr[300442]:
[volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:20:19 localhost nova_compute[295778]: 2025-10-14 10:20:19.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0eb88f0a-5596-479e-940f-8e6a2102a41a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/.meta.tmp' Oct 14 06:20:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/.meta.tmp' to config b'/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/.meta' Oct 14 06:20:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0eb88f0a-5596-479e-940f-8e6a2102a41a", "format": "json"}]: dispatch 
Oct 14 06:20:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:19 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "5929022c-b34d-4cb0-a7ed-8d30e380048c", "format": "json"}]: dispatch Oct 14 06:20:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5929022c-b34d-4cb0-a7ed-8d30e380048c, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 40 KiB/s wr, 33 op/s Oct 14 06:20:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5929022c-b34d-4cb0-a7ed-8d30e380048c, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e197 do_prune osdmap full prune 
enabled Oct 14 06:20:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e198 e198: 6 total, 6 up, 6 in Oct 14 06:20:19 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e198: 6 total, 6 up, 6 in Oct 14 06:20:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e198 do_prune osdmap full prune enabled Oct 14 06:20:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e199 e199: 6 total, 6 up, 6 in Oct 14 06:20:20 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e199: 6 total, 6 up, 6 in Oct 14 06:20:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:20:20 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "auth_id": "eve49", "tenant_id": "cfaaf8f0dc5544b1a4a7ff1c7da02ab8", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:20:20 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, tenant_id:cfaaf8f0dc5544b1a4a7ff1c7da02ab8, vol_name:cephfs) < "" Oct 14 06:20:20 localhost podman[343350]: 2025-10-14 10:20:20.560100181 +0000 UTC m=+0.094482164 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:20:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Oct 14 06:20:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Oct 14 06:20:20 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID eve49 with tenant 
cfaaf8f0dc5544b1a4a7ff1c7da02ab8 Oct 14 06:20:20 localhost podman[343350]: 2025-10-14 10:20:20.576660362 +0000 UTC m=+0.111042395 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:20:20 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated 
successfully. Oct 14 06:20:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:20:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:20 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:20 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, tenant_id:cfaaf8f0dc5544b1a4a7ff1c7da02ab8, vol_name:cephfs) < "" Oct 14 06:20:20 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Oct 14 06:20:20 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:20 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:21 localhost nova_compute[295778]: 2025-10-14 10:20:21.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 195 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 65 KiB/s wr, 101 op/s Oct 14 06:20:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e199 do_prune osdmap full prune enabled Oct 14 06:20:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e200 e200: 6 total, 6 up, 6 in Oct 14 06:20:21 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e200: 6 total, 6 up, 6 in Oct 14 06:20:21 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "793d0dc0-9028-4689-9a1f-ad45f585fbfb", "snap_name": "ee86a50d-605c-4131-941e-79a81fefe4fc_073e5b25-7e88-456d-bcfe-1edd179d256d", "force": true, "format": "json"}]: dispatch Oct 14 06:20:21 localhost ceph-mgr[300442]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ee86a50d-605c-4131-941e-79a81fefe4fc_073e5b25-7e88-456d-bcfe-1edd179d256d, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < "" Oct 14 06:20:21 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb/.meta.tmp' Oct 14 06:20:21 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb/.meta.tmp' to config b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb/.meta' Oct 14 06:20:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ee86a50d-605c-4131-941e-79a81fefe4fc_073e5b25-7e88-456d-bcfe-1edd179d256d, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < "" Oct 14 06:20:21 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "793d0dc0-9028-4689-9a1f-ad45f585fbfb", "snap_name": "ee86a50d-605c-4131-941e-79a81fefe4fc", "force": true, "format": "json"}]: dispatch Oct 14 06:20:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ee86a50d-605c-4131-941e-79a81fefe4fc, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < "" Oct 14 06:20:21 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb/.meta.tmp' Oct 14 06:20:21 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb/.meta.tmp' to config b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb/.meta' Oct 14 06:20:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ee86a50d-605c-4131-941e-79a81fefe4fc, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "26474148-16eb-44ef-b9e3-751a008e840b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "69b55bb8-8324-4c30-842e-f30a5722e00f", "format": "json"}]: dispatch Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:26474148-16eb-44ef-b9e3-751a008e840b, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/69b55bb8-8324-4c30-842e-f30a5722e00f/26474148-16eb-44ef-b9e3-751a008e840b/.meta.tmp' Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/69b55bb8-8324-4c30-842e-f30a5722e00f/26474148-16eb-44ef-b9e3-751a008e840b/.meta.tmp' to config b'/volumes/69b55bb8-8324-4c30-842e-f30a5722e00f/26474148-16eb-44ef-b9e3-751a008e840b/.meta' Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:26474148-16eb-44ef-b9e3-751a008e840b, vol_name:cephfs) < "" 
Oct 14 06:20:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "26474148-16eb-44ef-b9e3-751a008e840b", "group_name": "69b55bb8-8324-4c30-842e-f30a5722e00f", "format": "json"}]: dispatch Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs subvolume getpath, sub_name:26474148-16eb-44ef-b9e3-751a008e840b, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs subvolume getpath, sub_name:26474148-16eb-44ef-b9e3-751a008e840b, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0eb88f0a-5596-479e-940f-8e6a2102a41a", "auth_id": "Joe", "tenant_id": "7d5ad1bff55849678b704b0426176444", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, tenant_id:7d5ad1bff55849678b704b0426176444, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": 
"client.Joe", "format": "json"} v 0) Oct 14 06:20:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, tenant_id:7d5ad1bff55849678b704b0426176444, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:22.576+0000 7ff5d7f75640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Oct 14 06:20:22 localhost ceph-mgr[300442]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Oct 14 06:20:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e200 do_prune osdmap full prune enabled Oct 14 06:20:22 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 14 06:20:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e201 e201: 6 total, 6 up, 6 in Oct 14 06:20:22 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e201: 6 total, 6 up, 6 in Oct 14 06:20:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "f026872b-2716-44d6-ae5f-5cca2b5cc7dd", "format": "json"}]: dispatch Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f026872b-2716-44d6-ae5f-5cca2b5cc7dd, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f026872b-2716-44d6-ae5f-5cca2b5cc7dd, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6630d4ec-cb59-4e46-9de3-2b13991fefe9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6630d4ec-cb59-4e46-9de3-2b13991fefe9, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6630d4ec-cb59-4e46-9de3-2b13991fefe9/.meta.tmp' Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6630d4ec-cb59-4e46-9de3-2b13991fefe9/.meta.tmp' to config b'/volumes/_nogroup/6630d4ec-cb59-4e46-9de3-2b13991fefe9/.meta' Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6630d4ec-cb59-4e46-9de3-2b13991fefe9, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": 
"cephfs", "sub_name": "6630d4ec-cb59-4e46-9de3-2b13991fefe9", "format": "json"}]: dispatch Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6630d4ec-cb59-4e46-9de3-2b13991fefe9, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6630d4ec-cb59-4e46-9de3-2b13991fefe9, vol_name:cephfs) < "" Oct 14 06:20:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 195 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 101 KiB/s rd, 98 KiB/s wr, 152 op/s Oct 14 06:20:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e201 do_prune osdmap full prune enabled Oct 14 06:20:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e202 e202: 6 total, 6 up, 6 in Oct 14 06:20:23 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e202: 6 total, 6 up, 6 in Oct 14 06:20:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "auth_id": "eve48", "tenant_id": "cfaaf8f0dc5544b1a4a7ff1c7da02ab8", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:20:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, 
sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, tenant_id:cfaaf8f0dc5544b1a4a7ff1c7da02ab8, vol_name:cephfs) < "" Oct 14 06:20:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Oct 14 06:20:23 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Oct 14 06:20:23 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID eve48 with tenant cfaaf8f0dc5544b1a4a7ff1c7da02ab8 Oct 14 06:20:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:20:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:24 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:24 localhost nova_compute[295778]: 2025-10-14 10:20:24.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, tenant_id:cfaaf8f0dc5544b1a4a7ff1c7da02ab8, vol_name:cephfs) < "" Oct 14 06:20:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Oct 14 06:20:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:24 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "793d0dc0-9028-4689-9a1f-ad45f585fbfb", "format": "json"}]: dispatch Oct 14 06:20:24 
localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:24 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '793d0dc0-9028-4689-9a1f-ad45f585fbfb' of type subvolume Oct 14 06:20:24 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:24.871+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '793d0dc0-9028-4689-9a1f-ad45f585fbfb' of type subvolume Oct 14 06:20:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "793d0dc0-9028-4689-9a1f-ad45f585fbfb", "force": true, "format": "json"}]: dispatch Oct 14 06:20:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < "" Oct 14 06:20:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/793d0dc0-9028-4689-9a1f-ad45f585fbfb'' moved to trashcan Oct 14 06:20:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:20:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:793d0dc0-9028-4689-9a1f-ad45f585fbfb, vol_name:cephfs) < "" Oct 14 06:20:24 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e202 do_prune osdmap full prune enabled Oct 14 06:20:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e203 e203: 6 total, 6 up, 6 in Oct 14 06:20:25 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e203: 6 total, 6 up, 6 in Oct 14 06:20:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 195 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 205 KiB/s rd, 127 KiB/s wr, 294 op/s Oct 14 06:20:25 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0eb88f0a-5596-479e-940f-8e6a2102a41a", "auth_id": "tempest-cephx-id-1765384422", "tenant_id": "7d5ad1bff55849678b704b0426176444", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:20:25 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1765384422, format:json, prefix:fs subvolume authorize, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, tenant_id:7d5ad1bff55849678b704b0426176444, vol_name:cephfs) < "" Oct 14 06:20:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1765384422", "format": "json"} v 0) Oct 14 06:20:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1765384422", "format": "json"} : dispatch Oct 14 06:20:25 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID tempest-cephx-id-1765384422 with tenant 
7d5ad1bff55849678b704b0426176444 Oct 14 06:20:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1765384422", "caps": ["mds", "allow rw path=/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/ecf8dc76-3b91-46d6-9446-c258a56234b9", "osd", "allow rw pool=manila_data namespace=fsvolumens_0eb88f0a-5596-479e-940f-8e6a2102a41a", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:20:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1765384422", "caps": ["mds", "allow rw path=/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/ecf8dc76-3b91-46d6-9446-c258a56234b9", "osd", "allow rw pool=manila_data namespace=fsvolumens_0eb88f0a-5596-479e-940f-8e6a2102a41a", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1765384422", "caps": ["mds", "allow rw path=/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/ecf8dc76-3b91-46d6-9446-c258a56234b9", "osd", "allow rw pool=manila_data namespace=fsvolumens_0eb88f0a-5596-479e-940f-8e6a2102a41a", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:25 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1765384422, format:json, prefix:fs subvolume authorize, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, tenant_id:7d5ad1bff55849678b704b0426176444, vol_name:cephfs) < "" Oct 14 06:20:26 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": 
"client.tempest-cephx-id-1765384422", "format": "json"} : dispatch Oct 14 06:20:26 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1765384422", "caps": ["mds", "allow rw path=/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/ecf8dc76-3b91-46d6-9446-c258a56234b9", "osd", "allow rw pool=manila_data namespace=fsvolumens_0eb88f0a-5596-479e-940f-8e6a2102a41a", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:26 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1765384422", "caps": ["mds", "allow rw path=/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/ecf8dc76-3b91-46d6-9446-c258a56234b9", "osd", "allow rw pool=manila_data namespace=fsvolumens_0eb88f0a-5596-479e-940f-8e6a2102a41a", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:26 localhost nova_compute[295778]: 2025-10-14 10:20:26.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:26 localhost ovn_metadata_agent[161927]: 2025-10-14 10:20:26.351 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:20:26 localhost ovn_metadata_agent[161927]: 2025-10-14 
10:20:26.352 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:20:26 localhost nova_compute[295778]: 2025-10-14 10:20:26.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:26 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "8e7bbe4c-5085-477b-8548-640a74e704cd", "format": "json"}]: dispatch Oct 14 06:20:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8e7bbe4c-5085-477b-8548-640a74e704cd, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8e7bbe4c-5085-477b-8548-640a74e704cd, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:20:26 localhost podman[343371]: 2025-10-14 10:20:26.564573924 +0000 UTC m=+0.092726628 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:20:26 localhost podman[343370]: 2025-10-14 10:20:26.532809329 +0000 UTC m=+0.070383293 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Oct 14 06:20:26 localhost podman[343371]: 2025-10-14 10:20:26.60311734 +0000 UTC m=+0.131270064 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:20:26 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:20:26 localhost podman[343370]: 2025-10-14 10:20:26.618463727 +0000 UTC m=+0.156037681 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, 
tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:20:26 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:20:26 localhost nova_compute[295778]: 2025-10-14 10:20:26.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:26 localhost nova_compute[295778]: 2025-10-14 10:20:26.929 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:20:26 localhost nova_compute[295778]: 2025-10-14 10:20:26.930 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:20:26 localhost nova_compute[295778]: 2025-10-14 10:20:26.930 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:20:26 localhost nova_compute[295778]: 2025-10-14 10:20:26.931 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) 
update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:20:26 localhost nova_compute[295778]: 2025-10-14 10:20:26.931 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:20:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 195 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 143 KiB/s rd, 88 KiB/s wr, 204 op/s Oct 14 06:20:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "auth_id": "eve48", "format": "json"}]: dispatch Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:27 localhost nova_compute[295778]: 2025-10-14 10:20:27.410 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:20:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Oct 14 06:20:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Oct 14 06:20:27 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) Oct 14 06:20:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Oct 14 06:20:27 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "auth_id": "eve48", "format": "json"}]: dispatch Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634 Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:27 localhost nova_compute[295778]: 2025-10-14 10:20:27.618 2 WARNING 
nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:20:27 localhost nova_compute[295778]: 2025-10-14 10:20:27.620 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11371MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", 
"numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:20:27 localhost nova_compute[295778]: 2025-10-14 10:20:27.621 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:20:27 localhost nova_compute[295778]: 2025-10-14 10:20:27.621 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:20:27 localhost nova_compute[295778]: 2025-10-14 10:20:27.691 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:20:27 localhost nova_compute[295778]: 2025-10-14 10:20:27.692 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:20:27 localhost nova_compute[295778]: 2025-10-14 10:20:27.711 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:20:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6630d4ec-cb59-4e46-9de3-2b13991fefe9", "format": "json"}]: dispatch Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6630d4ec-cb59-4e46-9de3-2b13991fefe9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6630d4ec-cb59-4e46-9de3-2b13991fefe9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:27 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6630d4ec-cb59-4e46-9de3-2b13991fefe9' of type subvolume Oct 14 06:20:27 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:27.714+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6630d4ec-cb59-4e46-9de3-2b13991fefe9' of type subvolume Oct 14 06:20:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6630d4ec-cb59-4e46-9de3-2b13991fefe9", "force": true, "format": "json"}]: dispatch Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6630d4ec-cb59-4e46-9de3-2b13991fefe9, vol_name:cephfs) < "" Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6630d4ec-cb59-4e46-9de3-2b13991fefe9'' moved to trashcan Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:20:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6630d4ec-cb59-4e46-9de3-2b13991fefe9, vol_name:cephfs) < "" Oct 14 06:20:28 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Oct 14 06:20:28 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Oct 14 06:20:28 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Oct 14 06:20:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:20:28 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3125668163' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:20:28 localhost nova_compute[295778]: 2025-10-14 10:20:28.183 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:20:28 localhost nova_compute[295778]: 2025-10-14 10:20:28.191 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:20:28 localhost nova_compute[295778]: 2025-10-14 10:20:28.206 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:20:28 localhost nova_compute[295778]: 2025-10-14 10:20:28.208 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:20:28 localhost nova_compute[295778]: 2025-10-14 10:20:28.209 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:20:29 localhost nova_compute[295778]: 2025-10-14 10:20:29.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e203 do_prune osdmap full prune enabled Oct 14 06:20:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e204 e204: 6 total, 6 up, 6 in Oct 14 06:20:29 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e204: 6 total, 6 up, 6 in Oct 14 06:20:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0eb88f0a-5596-479e-940f-8e6a2102a41a", "auth_id": "Joe", "format": "json"}]: dispatch Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '0eb88f0a-5596-479e-940f-8e6a2102a41a' Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": 
"0eb88f0a-5596-479e-940f-8e6a2102a41a", "auth_id": "Joe", "format": "json"}]: dispatch Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/ecf8dc76-3b91-46d6-9446-c258a56234b9 Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "69b55bb8-8324-4c30-842e-f30a5722e00f", "snap_name": "7bf0447e-0945-4562-83ec-63e5700d4db0", "force": true, "format": "json"}]: dispatch Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs subvolumegroup snapshot rm, snap_name:7bf0447e-0945-4562-83ec-63e5700d4db0, vol_name:cephfs) < "" Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs subvolumegroup snapshot rm, snap_name:7bf0447e-0945-4562-83ec-63e5700d4db0, vol_name:cephfs) < "" Oct 14 06:20:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 195 MiB data, 1006 
MiB used, 41 GiB / 42 GiB avail; 137 KiB/s rd, 85 KiB/s wr, 196 op/s Oct 14 06:20:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "133b25b2-e9be-4ced-80d4-1ad4575fad48", "format": "json"}]: dispatch Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:133b25b2-e9be-4ced-80d4-1ad4575fad48, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:133b25b2-e9be-4ced-80d4-1ad4575fad48, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e204 do_prune osdmap full prune enabled Oct 14 06:20:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e205 e205: 6 total, 6 up, 6 in Oct 14 06:20:30 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e205: 6 total, 6 up, 6 in Oct 14 06:20:30 localhost nova_compute[295778]: 2025-10-14 10:20:30.209 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:30 localhost ovn_metadata_agent[161927]: 2025-10-14 10:20:30.356 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, 
table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:20:30 localhost podman[246584]: time="2025-10-14T10:20:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:20:30 localhost podman[246584]: @ - - [14/Oct/2025:10:20:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:20:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "auth_id": "eve47", "tenant_id": "cfaaf8f0dc5544b1a4a7ff1c7da02ab8", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:20:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, tenant_id:cfaaf8f0dc5544b1a4a7ff1c7da02ab8, vol_name:cephfs) < "" Oct 14 06:20:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Oct 14 06:20:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Oct 14 06:20:30 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID eve47 with tenant cfaaf8f0dc5544b1a4a7ff1c7da02ab8 Oct 14 06:20:30 localhost podman[246584]: @ - - [14/Oct/2025:10:20:30 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18908 "" "Go-http-client/1.1" Oct 14 06:20:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:20:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, tenant_id:cfaaf8f0dc5544b1a4a7ff1c7da02ab8, vol_name:cephfs) < "" Oct 14 06:20:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e205 do_prune osdmap full prune enabled Oct 14 06:20:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Oct 14 06:20:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:31 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634", "osd", "allow rw pool=manila_data namespace=fsvolumens_0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e206 e206: 6 total, 6 up, 6 in Oct 14 06:20:31 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e206: 6 total, 6 up, 6 in Oct 14 06:20:31 localhost nova_compute[295778]: 2025-10-14 10:20:31.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 196 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 71 KiB/s wr, 32 op/s Oct 14 06:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:20:31 localhost podman[343457]: 2025-10-14 10:20:31.54489586 +0000 UTC m=+0.085247328 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:20:31 localhost podman[343457]: 2025-10-14 10:20:31.555487332 +0000 UTC m=+0.095838840 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid) Oct 14 06:20:31 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:20:31 localhost podman[343458]: 2025-10-14 10:20:31.652118722 +0000 UTC m=+0.186772249 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 06:20:31 localhost podman[343458]: 2025-10-14 10:20:31.664670287 +0000 UTC m=+0.199323814 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd) Oct 14 06:20:31 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:20:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0eb88f0a-5596-479e-940f-8e6a2102a41a", "auth_id": "tempest-cephx-id-1765384422", "format": "json"}]: dispatch Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1765384422, format:json, prefix:fs subvolume deauthorize, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1765384422", "format": "json"} v 0) Oct 14 06:20:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1765384422", "format": "json"} : dispatch Oct 14 06:20:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1765384422"} v 0) Oct 14 06:20:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1765384422"} : dispatch Oct 14 06:20:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1765384422"}]': finished Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1765384422, format:json, prefix:fs subvolume deauthorize, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:32 localhost ceph-mgr[300442]: 
log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0eb88f0a-5596-479e-940f-8e6a2102a41a", "auth_id": "tempest-cephx-id-1765384422", "format": "json"}]: dispatch Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1765384422, format:json, prefix:fs subvolume evict, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1765384422, client_metadata.root=/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a/ecf8dc76-3b91-46d6-9446-c258a56234b9 Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1765384422, format:json, prefix:fs subvolume evict, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:20:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "26474148-16eb-44ef-b9e3-751a008e840b", "group_name": "69b55bb8-8324-4c30-842e-f30a5722e00f", "format": "json"}]: dispatch Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:26474148-16eb-44ef-b9e3-751a008e840b, format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:26474148-16eb-44ef-b9e3-751a008e840b, format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs clone status, vol_name:cephfs) 
< "" Oct 14 06:20:32 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:32.696+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '26474148-16eb-44ef-b9e3-751a008e840b' of type subvolume Oct 14 06:20:32 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '26474148-16eb-44ef-b9e3-751a008e840b' of type subvolume Oct 14 06:20:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "26474148-16eb-44ef-b9e3-751a008e840b", "force": true, "group_name": "69b55bb8-8324-4c30-842e-f30a5722e00f", "format": "json"}]: dispatch Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs subvolume rm, sub_name:26474148-16eb-44ef-b9e3-751a008e840b, vol_name:cephfs) < "" Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/69b55bb8-8324-4c30-842e-f30a5722e00f/26474148-16eb-44ef-b9e3-751a008e840b'' moved to trashcan Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:20:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs subvolume rm, sub_name:26474148-16eb-44ef-b9e3-751a008e840b, vol_name:cephfs) < "" Oct 14 06:20:32 localhost nova_compute[295778]: 2025-10-14 10:20:32.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:32 localhost nova_compute[295778]: 2025-10-14 10:20:32.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:20:32 localhost nova_compute[295778]: 2025-10-14 10:20:32.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:20:32 localhost nova_compute[295778]: 2025-10-14 10:20:32.919 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:20:33 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4029ebee-9d6d-4f6d-a493-3c2747a51c66", "format": "json"}]: dispatch Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4029ebee-9d6d-4f6d-a493-3c2747a51c66, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4029ebee-9d6d-4f6d-a493-3c2747a51c66, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:33 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:33.013+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4029ebee-9d6d-4f6d-a493-3c2747a51c66' of type subvolume Oct 14 06:20:33 localhost 
ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4029ebee-9d6d-4f6d-a493-3c2747a51c66' of type subvolume Oct 14 06:20:33 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4029ebee-9d6d-4f6d-a493-3c2747a51c66", "force": true, "format": "json"}]: dispatch Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4029ebee-9d6d-4f6d-a493-3c2747a51c66, vol_name:cephfs) < "" Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4029ebee-9d6d-4f6d-a493-3c2747a51c66'' moved to trashcan Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4029ebee-9d6d-4f6d-a493-3c2747a51c66, vol_name:cephfs) < "" Oct 14 06:20:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1765384422", "format": "json"} : dispatch Oct 14 06:20:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1765384422"} : dispatch Oct 14 06:20:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1765384422"}]': finished Oct 14 06:20:33 localhost openstack_network_exporter[248748]: ERROR 10:20:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket 
files found for the ovs db server Oct 14 06:20:33 localhost openstack_network_exporter[248748]: ERROR 10:20:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:20:33 localhost openstack_network_exporter[248748]: Oct 14 06:20:33 localhost openstack_network_exporter[248748]: ERROR 10:20:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:20:33 localhost openstack_network_exporter[248748]: Oct 14 06:20:33 localhost openstack_network_exporter[248748]: ERROR 10:20:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:20:33 localhost openstack_network_exporter[248748]: ERROR 10:20:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:20:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 196 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 71 KiB/s wr, 32 op/s Oct 14 06:20:33 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "133b25b2-e9be-4ced-80d4-1ad4575fad48_ab416c18-03b5-4b9b-bb00-22f851f74493", "force": true, "format": "json"}]: dispatch Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:133b25b2-e9be-4ced-80d4-1ad4575fad48_ab416c18-03b5-4b9b-bb00-22f851f74493, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:133b25b2-e9be-4ced-80d4-1ad4575fad48_ab416c18-03b5-4b9b-bb00-22f851f74493, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:33 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "133b25b2-e9be-4ced-80d4-1ad4575fad48", "force": true, "format": "json"}]: dispatch Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:133b25b2-e9be-4ced-80d4-1ad4575fad48, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 06:20:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:133b25b2-e9be-4ced-80d4-1ad4575fad48, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command 
mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:20:33 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:20:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:20:33 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:20:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:20:33 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:20:33 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 0087eb55-29a4-4315-a558-d0cae4528e9c (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:20:33 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 0087eb55-29a4-4315-a558-d0cae4528e9c (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:20:33 localhost ceph-mgr[300442]: [progress INFO root] Completed event 0087eb55-29a4-4315-a558-d0cae4528e9c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:20:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:20:33 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:20:34 localhost nova_compute[295778]: 2025-10-14 10:20:34.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:34 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:20:34 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:20:34 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:20:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:20:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "auth_id": "eve47", "format": "json"}]: dispatch Oct 14 06:20:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:20:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Oct 14 06:20:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Oct 14 06:20:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) Oct 14 06:20:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 
172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Oct 14 06:20:34 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Oct 14 06:20:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "auth_id": "eve47", "format": "json"}]: dispatch Oct 14 06:20:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634 Oct 14 06:20:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:20:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:34 localhost nova_compute[295778]: 2025-10-14 10:20:34.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:34 localhost nova_compute[295778]: 2025-10-14 10:20:34.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:34 localhost nova_compute[295778]: 2025-10-14 10:20:34.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 196 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 168 KiB/s wr, 79 op/s Oct 14 06:20:35 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:20:35 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Oct 14 06:20:35 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Oct 14 06:20:35 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Oct 14 06:20:35 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", 
"group_name": "69b55bb8-8324-4c30-842e-f30a5722e00f", "force": true, "format": "json"}]: dispatch Oct 14 06:20:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:20:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:69b55bb8-8324-4c30-842e-f30a5722e00f, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:20:35 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "auth_id": "Joe", "format": "json"}]: dispatch Oct 14 06:20:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < "" Oct 14 06:20:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Oct 14 06:20:35 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 14 06:20:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) Oct 14 06:20:35 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Oct 14 06:20:35 localhost nova_compute[295778]: 2025-10-14 10:20:35.904 2 DEBUG oslo_service.periodic_task [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:35 localhost nova_compute[295778]: 2025-10-14 10:20:35.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:20:35 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Oct 14 06:20:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < "" Oct 14 06:20:35 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "auth_id": "Joe", "format": "json"}]: dispatch Oct 14 06:20:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < "" Oct 14 06:20:35 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629/b0a90cce-d7c2-42b6-a80f-3807b51966b8 Oct 14 06:20:35 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:20:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs 
subvolume evict, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < "" Oct 14 06:20:36 localhost nova_compute[295778]: 2025-10-14 10:20:36.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:36 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 14 06:20:36 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Oct 14 06:20:36 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Oct 14 06:20:36 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "8e7bbe4c-5085-477b-8548-640a74e704cd_d953094c-6c8e-47ea-aa6e-16c696fa228c", "force": true, "format": "json"}]: dispatch Oct 14 06:20:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8e7bbe4c-5085-477b-8548-640a74e704cd_d953094c-6c8e-47ea-aa6e-16c696fa228c, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config 
b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 06:20:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8e7bbe4c-5085-477b-8548-640a74e704cd_d953094c-6c8e-47ea-aa6e-16c696fa228c, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:36 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "8e7bbe4c-5085-477b-8548-640a74e704cd", "force": true, "format": "json"}]: dispatch Oct 14 06:20:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8e7bbe4c-5085-477b-8548-640a74e704cd, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 06:20:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8e7bbe4c-5085-477b-8548-640a74e704cd, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 196 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 133 KiB/s wr, 62 op/s Oct 14 06:20:37 localhost 
ceph-mon[307093]: mon.np0005486731@0(leader).osd e206 do_prune osdmap full prune enabled Oct 14 06:20:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e207 e207: 6 total, 6 up, 6 in Oct 14 06:20:37 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e207: 6 total, 6 up, 6 in Oct 14 06:20:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e207 do_prune osdmap full prune enabled Oct 14 06:20:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e208 e208: 6 total, 6 up, 6 in Oct 14 06:20:38 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e208: 6 total, 6 up, 6 in Oct 14 06:20:38 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "auth_id": "eve49", "format": "json"}]: dispatch Oct 14 06:20:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:38 localhost nova_compute[295778]: 2025-10-14 10:20:38.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Oct 14 06:20:38 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Oct 14 06:20:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command 
mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) Oct 14 06:20:38 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Oct 14 06:20:38 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:39 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "auth_id": "eve49", "format": "json"}]: dispatch Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f/a86d1791-a890-4116-b06d-8abaae958634 Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:39 localhost nova_compute[295778]: 2025-10-14 10:20:39.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:20:39 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "format": "json"}]: dispatch Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:39.287+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f' of type subvolume Oct 14 06:20:39 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f' of type subvolume Oct 14 06:20:39 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f", "force": true, "format": "json"}]: dispatch Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f'' moved to trashcan Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0d9f7a76-cbf2-46e5-ab0a-bc125ebb379f, vol_name:cephfs) < "" Oct 14 06:20:39 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "343c69e2-b2b0-4638-b91f-68171be807ee", "auth_id": "admin", "tenant_id": "c0a291c12d684e0180079f4c9858e70d", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, tenant_id:c0a291c12d684e0180079f4c9858e70d, vol_name:cephfs) < "" Oct 14 06:20:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) Oct 14 06:20:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Oct 14 06:20:39 localhost 
ceph-mgr[300442]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify Oct 14 06:20:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, tenant_id:c0a291c12d684e0180079f4c9858e70d, vol_name:cephfs) < "" Oct 14 06:20:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:39.359+0000 7ff5d7f75640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Oct 14 06:20:39 localhost ceph-mgr[300442]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Oct 14 06:20:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 196 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 80 KiB/s wr, 38 op/s Oct 14 06:20:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:20:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:20:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:20:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e208 do_prune osdmap full prune enabled Oct 14 06:20:39 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Oct 14 06:20:39 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Oct 14 06:20:39 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Oct 14 06:20:39 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Oct 14 06:20:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e209 e209: 6 total, 6 up, 6 in Oct 14 06:20:39 localhost podman[343585]: 2025-10-14 10:20:39.558514614 +0000 UTC m=+0.088733882 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:20:39 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e209: 6 total, 6 up, 6 in Oct 14 06:20:39 localhost podman[343584]: 2025-10-14 10:20:39.634080864 +0000 UTC m=+0.167912458 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, container_name=openstack_network_exporter) Oct 14 06:20:39 localhost podman[343584]: 2025-10-14 10:20:39.64785159 +0000 UTC m=+0.181683154 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, 
vcs-type=git, vendor=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public) Oct 14 06:20:39 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:20:39 localhost podman[343586]: 2025-10-14 10:20:39.73279945 +0000 UTC m=+0.257113481 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:20:39 localhost podman[343585]: 2025-10-14 10:20:39.741954624 +0000 UTC m=+0.272173902 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:20:39 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:20:39 localhost podman[343586]: 2025-10-14 10:20:39.796795223 +0000 UTC m=+0.321109274 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:20:39 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:20:39 localhost nova_compute[295778]: 2025-10-14 10:20:39.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "f026872b-2716-44d6-ae5f-5cca2b5cc7dd_257ec5f5-cac7-43b5-b513-e24d7a6d79d1", "force": true, "format": "json"}]: dispatch Oct 14 06:20:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e209 do_prune osdmap full prune enabled Oct 14 06:20:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f026872b-2716-44d6-ae5f-5cca2b5cc7dd_257ec5f5-cac7-43b5-b513-e24d7a6d79d1, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e210 e210: 6 total, 6 up, 6 in Oct 14 06:20:40 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e210: 6 total, 6 up, 6 in Oct 14 06:20:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 
06:20:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f026872b-2716-44d6-ae5f-5cca2b5cc7dd_257ec5f5-cac7-43b5-b513-e24d7a6d79d1, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "f026872b-2716-44d6-ae5f-5cca2b5cc7dd", "force": true, "format": "json"}]: dispatch Oct 14 06:20:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f026872b-2716-44d6-ae5f-5cca2b5cc7dd, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 06:20:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f026872b-2716-44d6-ae5f-5cca2b5cc7dd, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:40 localhost systemd[1]: tmp-crun.eD1AGX.mount: Deactivated successfully. 
Oct 14 06:20:40 localhost nova_compute[295778]: 2025-10-14 10:20:40.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:20:41 localhost nova_compute[295778]: 2025-10-14 10:20:41.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 197 MiB data, 1011 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 171 KiB/s wr, 43 op/s Oct 14 06:20:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:41 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1186436625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:42 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "343c69e2-b2b0-4638-b91f-68171be807ee", "auth_id": "david", "tenant_id": "c0a291c12d684e0180079f4c9858e70d", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:20:42 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, tenant_id:c0a291c12d684e0180079f4c9858e70d, vol_name:cephfs) < "" Oct 14 06:20:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Oct 14 06:20:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 
172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 14 06:20:42 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID david with tenant c0a291c12d684e0180079f4c9858e70d Oct 14 06:20:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee/d81f744d-ecd5-450e-b4f4-9060c1e362ff", "osd", "allow rw pool=manila_data namespace=fsvolumens_343c69e2-b2b0-4638-b91f-68171be807ee", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:20:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee/d81f744d-ecd5-450e-b4f4-9060c1e362ff", "osd", "allow rw pool=manila_data namespace=fsvolumens_343c69e2-b2b0-4638-b91f-68171be807ee", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee/d81f744d-ecd5-450e-b4f4-9060c1e362ff", "osd", "allow rw pool=manila_data namespace=fsvolumens_343c69e2-b2b0-4638-b91f-68171be807ee", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:42 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, tenant_id:c0a291c12d684e0180079f4c9858e70d, vol_name:cephfs) < 
"" Oct 14 06:20:42 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 14 06:20:42 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee/d81f744d-ecd5-450e-b4f4-9060c1e362ff", "osd", "allow rw pool=manila_data namespace=fsvolumens_343c69e2-b2b0-4638-b91f-68171be807ee", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:42 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee/d81f744d-ecd5-450e-b4f4-9060c1e362ff", "osd", "allow rw pool=manila_data namespace=fsvolumens_343c69e2-b2b0-4638-b91f-68171be807ee", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "5929022c-b34d-4cb0-a7ed-8d30e380048c_78aa7921-b81e-4c36-a2fb-c54d6895cc9c", "force": true, "format": "json"}]: dispatch Oct 14 06:20:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5929022c-b34d-4cb0-a7ed-8d30e380048c_78aa7921-b81e-4c36-a2fb-c54d6895cc9c, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 06:20:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5929022c-b34d-4cb0-a7ed-8d30e380048c_78aa7921-b81e-4c36-a2fb-c54d6895cc9c, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "5929022c-b34d-4cb0-a7ed-8d30e380048c", "force": true, "format": "json"}]: dispatch Oct 14 06:20:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5929022c-b34d-4cb0-a7ed-8d30e380048c, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 06:20:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5929022c-b34d-4cb0-a7ed-8d30e380048c, 
sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 197 MiB data, 1011 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 116 KiB/s wr, 29 op/s Oct 14 06:20:44 localhost nova_compute[295778]: 2025-10-14 10:20:44.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e210 do_prune osdmap full prune enabled Oct 14 06:20:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e211 e211: 6 total, 6 up, 6 in Oct 14 06:20:45 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e211: 6 total, 6 up, 6 in Oct 14 06:20:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 453 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 99 KiB/s rd, 43 MiB/s wr, 177 op/s Oct 14 06:20:45 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a27e628-ba54-4cb6-9e1c-c2178456fa95", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:45 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a27e628-ba54-4cb6-9e1c-c2178456fa95/.meta.tmp' 
Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a27e628-ba54-4cb6-9e1c-c2178456fa95/.meta.tmp' to config b'/volumes/_nogroup/7a27e628-ba54-4cb6-9e1c-c2178456fa95/.meta' Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:46 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a27e628-ba54-4cb6-9e1c-c2178456fa95", "format": "json"}]: dispatch Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:46 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e211 do_prune osdmap full prune enabled Oct 14 06:20:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e212 e212: 6 total, 6 up, 6 in Oct 14 06:20:46 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e212: 6 total, 6 up, 6 in Oct 14 06:20:46 localhost nova_compute[295778]: 2025-10-14 10:20:46.221 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:46 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "debb450a-d993-4694-aef4-978f83e0e2e9_4871fc2d-a6b2-4ccc-b2cf-760f0f0e19a8", "force": true, "format": "json"}]: dispatch Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:debb450a-d993-4694-aef4-978f83e0e2e9_4871fc2d-a6b2-4ccc-b2cf-760f0f0e19a8, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:debb450a-d993-4694-aef4-978f83e0e2e9_4871fc2d-a6b2-4ccc-b2cf-760f0f0e19a8, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:46 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "snap_name": "debb450a-d993-4694-aef4-978f83e0e2e9", "force": true, "format": "json"}]: dispatch Oct 14 06:20:46 localhost ceph-mgr[300442]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:debb450a-d993-4694-aef4-978f83e0e2e9, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta.tmp' to config b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58/.meta' Oct 14 06:20:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:debb450a-d993-4694-aef4-978f83e0e2e9, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e212 do_prune osdmap full prune enabled Oct 14 06:20:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e213 e213: 6 total, 6 up, 6 in Oct 14 06:20:47 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e213: 6 total, 6 up, 6 in Oct 14 06:20:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 453 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 88 KiB/s rd, 43 MiB/s wr, 147 op/s Oct 14 06:20:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:20:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/.meta.tmp' Oct 14 06:20:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/.meta.tmp' to config b'/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/.meta' Oct 14 06:20:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:20:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "format": "json"}]: dispatch Oct 14 06:20:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:20:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:20:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:48 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader).osd e213 do_prune osdmap full prune enabled Oct 14 06:20:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e214 e214: 6 total, 6 up, 6 in Oct 14 06:20:48 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e214: 6 total, 6 up, 6 in Oct 14 06:20:48 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a27e628-ba54-4cb6-9e1c-c2178456fa95", "auth_id": "david", "tenant_id": "7d5ad1bff55849678b704b0426176444", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:20:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, tenant_id:7d5ad1bff55849678b704b0426176444, vol_name:cephfs) < "" Oct 14 06:20:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Oct 14 06:20:48 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 14 06:20:48 localhost ceph-mgr[300442]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use Oct 14 06:20:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, tenant_id:7d5ad1bff55849678b704b0426176444, vol_name:cephfs) < "" Oct 14 06:20:48 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:48.956+0000 7ff5d7f75640 -1 mgr.server reply reply (1) Operation not permitted auth ID: 
david is already in use Oct 14 06:20:48 localhost ceph-mgr[300442]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Oct 14 06:20:49 localhost nova_compute[295778]: 2025-10-14 10:20:49.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:49 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 14 06:20:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 453 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 121 KiB/s rd, 59 MiB/s wr, 203 op/s Oct 14 06:20:49 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "format": "json"}]: dispatch Oct 14 06:20:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:49 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:49.732+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '894593a6-c3dd-4d9c-94a7-5004b829ea58' of type subvolume Oct 14 06:20:49 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '894593a6-c3dd-4d9c-94a7-5004b829ea58' of type subvolume Oct 14 06:20:49 localhost 
ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "894593a6-c3dd-4d9c-94a7-5004b829ea58", "force": true, "format": "json"}]: dispatch Oct 14 06:20:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/894593a6-c3dd-4d9c-94a7-5004b829ea58'' moved to trashcan Oct 14 06:20:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:20:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:894593a6-c3dd-4d9c-94a7-5004b829ea58, vol_name:cephfs) < "" Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:20:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:20:50 
localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:20:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:20:51 localhost nova_compute[295778]: 2025-10-14 10:20:51.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 890 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 93 KiB/s rd, 73 MiB/s wr, 172 op/s Oct 14 06:20:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:20:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, 
vol_name:cephfs) < "" Oct 14 06:20:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 14 06:20:51 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:20:51 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:20:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:20:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:20:51 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:51 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", 
"allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:51 localhost podman[343653]: 2025-10-14 10:20:51.583675087 +0000 UTC m=+0.118597035 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 
14 06:20:51 localhost podman[343653]: 2025-10-14 10:20:51.631554891 +0000 UTC m=+0.166476909 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:20:51 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:20:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:20:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a27e628-ba54-4cb6-9e1c-c2178456fa95", "auth_id": "david", "format": "json"}]: dispatch Oct 14 06:20:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:52 localhost ceph-mgr[300442]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '7a27e628-ba54-4cb6-9e1c-c2178456fa95' Oct 14 06:20:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a27e628-ba54-4cb6-9e1c-c2178456fa95", "auth_id": "david", "format": "json"}]: dispatch Oct 14 06:20:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:52 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, 
client_metadata.root=/volumes/_nogroup/7a27e628-ba54-4cb6-9e1c-c2178456fa95/968a0ac2-089c-4ab4-9eee-633f27562616 Oct 14 06:20:52 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:20:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:52 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:20:52 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:52 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 890 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 76 KiB/s rd, 60 MiB/s wr, 141 op/s Oct 14 06:20:54 localhost nova_compute[295778]: 2025-10-14 10:20:54.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:54 
localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "cb16bd83-978e-4905-9591-cd016430587c", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:20:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:20:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch Oct 14 06:20:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:20:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 14 06:20:54 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:20:54 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 14 06:20:54 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 14 06:20:54 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 14 06:20:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:20:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch Oct 14 06:20:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:20:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:20:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:20:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:20:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:20:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e214 do_prune osdmap full prune enabled Oct 14 06:20:55 
localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e215 e215: 6 total, 6 up, 6 in Oct 14 06:20:55 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e215: 6 total, 6 up, 6 in Oct 14 06:20:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 198 MiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 172 KiB/s rd, 96 MiB/s wr, 326 op/s Oct 14 06:20:55 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:20:55 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 14 06:20:55 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 14 06:20:55 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "343c69e2-b2b0-4638-b91f-68171be807ee", "auth_id": "david", "format": "json"}]: dispatch Oct 14 06:20:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < "" Oct 14 06:20:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Oct 14 06:20:55 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 14 06:20:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 
handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Oct 14 06:20:55 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Oct 14 06:20:55 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Oct 14 06:20:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < "" Oct 14 06:20:55 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "343c69e2-b2b0-4638-b91f-68171be807ee", "auth_id": "david", "format": "json"}]: dispatch Oct 14 06:20:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < "" Oct 14 06:20:55 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee/d81f744d-ecd5-450e-b4f4-9060c1e362ff Oct 14 06:20:55 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:20:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < "" Oct 14 06:20:56 localhost nova_compute[295778]: 2025-10-14 10:20:56.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] 
on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:56 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 14 06:20:56 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Oct 14 06:20:56 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Oct 14 06:20:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e215 do_prune osdmap full prune enabled Oct 14 06:20:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e216 e216: 6 total, 6 up, 6 in Oct 14 06:20:56 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e216: 6 total, 6 up, 6 in Oct 14 06:20:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 198 MiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 172 KiB/s rd, 96 MiB/s wr, 326 op/s Oct 14 06:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:20:57 localhost podman[343676]: 2025-10-14 10:20:57.560754778 +0000 UTC m=+0.089006798 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:20:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3226f5dc-33cd-4972-a5c5-a246f84f4625", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "format": "json"}]: dispatch Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3226f5dc-33cd-4972-a5c5-a246f84f4625, vol_name:cephfs) < "" Oct 14 06:20:57 localhost podman[343675]: 2025-10-14 10:20:57.605611462 +0000 UTC m=+0.138850935 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 14 06:20:57 localhost podman[343675]: 2025-10-14 10:20:57.61155194 +0000 UTC m=+0.144791493 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:20:57 localhost podman[343676]: 2025-10-14 10:20:57.624244027 +0000 UTC m=+0.152496017 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:20:57 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:20:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:20:57.644 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:20:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:20:57.645 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:20:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:20:57.645 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:20:57 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/dde727d8-0fca-4f65-b7dd-370696baa208/3226f5dc-33cd-4972-a5c5-a246f84f4625/.meta.tmp' Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/dde727d8-0fca-4f65-b7dd-370696baa208/3226f5dc-33cd-4972-a5c5-a246f84f4625/.meta.tmp' to config b'/volumes/dde727d8-0fca-4f65-b7dd-370696baa208/3226f5dc-33cd-4972-a5c5-a246f84f4625/.meta' Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3226f5dc-33cd-4972-a5c5-a246f84f4625, vol_name:cephfs) < "" Oct 14 06:20:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3226f5dc-33cd-4972-a5c5-a246f84f4625", "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "format": "json"}]: dispatch Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolume getpath, sub_name:3226f5dc-33cd-4972-a5c5-a246f84f4625, vol_name:cephfs) < "" Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolume getpath, sub_name:3226f5dc-33cd-4972-a5c5-a246f84f4625, vol_name:cephfs) < "" Oct 14 06:20:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:57 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' 
entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2e879a16-a13a-427f-960d-74f27902f66c", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "format": "json"}]: dispatch Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:2e879a16-a13a-427f-960d-74f27902f66c, vol_name:cephfs) < "" Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/dde727d8-0fca-4f65-b7dd-370696baa208/2e879a16-a13a-427f-960d-74f27902f66c/.meta.tmp' Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/dde727d8-0fca-4f65-b7dd-370696baa208/2e879a16-a13a-427f-960d-74f27902f66c/.meta.tmp' to config b'/volumes/dde727d8-0fca-4f65-b7dd-370696baa208/2e879a16-a13a-427f-960d-74f27902f66c/.meta' Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:2e879a16-a13a-427f-960d-74f27902f66c, vol_name:cephfs) < "" Oct 14 06:20:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2e879a16-a13a-427f-960d-74f27902f66c", "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "format": "json"}]: dispatch Oct 14 06:20:57 localhost 
ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolume getpath, sub_name:2e879a16-a13a-427f-960d-74f27902f66c, vol_name:cephfs) < "" Oct 14 06:20:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolume getpath, sub_name:2e879a16-a13a-427f-960d-74f27902f66c, vol_name:cephfs) < "" Oct 14 06:20:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:57 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f43db045-2fb7-48a4-b8b2-3c671e6b1ee8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "cb16bd83-978e-4905-9591-cd016430587c", "format": "json"}]: dispatch Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f43db045-2fb7-48a4-b8b2-3c671e6b1ee8, vol_name:cephfs) < "" Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/cb16bd83-978e-4905-9591-cd016430587c/f43db045-2fb7-48a4-b8b2-3c671e6b1ee8/.meta.tmp' Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/cb16bd83-978e-4905-9591-cd016430587c/f43db045-2fb7-48a4-b8b2-3c671e6b1ee8/.meta.tmp' to config b'/volumes/cb16bd83-978e-4905-9591-cd016430587c/f43db045-2fb7-48a4-b8b2-3c671e6b1ee8/.meta' Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f43db045-2fb7-48a4-b8b2-3c671e6b1ee8, vol_name:cephfs) < "" Oct 14 06:20:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f43db045-2fb7-48a4-b8b2-3c671e6b1ee8", "group_name": "cb16bd83-978e-4905-9591-cd016430587c", "format": "json"}]: dispatch Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, prefix:fs subvolume getpath, sub_name:f43db045-2fb7-48a4-b8b2-3c671e6b1ee8, vol_name:cephfs) < "" Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, prefix:fs subvolume getpath, sub_name:f43db045-2fb7-48a4-b8b2-3c671e6b1ee8, vol_name:cephfs) < "" Oct 14 06:20:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:58 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "tenant_id": 
"40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:20:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 14 06:20:58 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:20:58 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:20:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:20:58 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:20:58 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:20:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "861ea875-98ef-40e2-b6cc-dda240f881fc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:861ea875-98ef-40e2-b6cc-dda240f881fc, vol_name:cephfs) < "" Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/861ea875-98ef-40e2-b6cc-dda240f881fc/.meta.tmp' Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/861ea875-98ef-40e2-b6cc-dda240f881fc/.meta.tmp' to config b'/volumes/_nogroup/861ea875-98ef-40e2-b6cc-dda240f881fc/.meta' Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:861ea875-98ef-40e2-b6cc-dda240f881fc, vol_name:cephfs) < "" Oct 
14 06:20:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "861ea875-98ef-40e2-b6cc-dda240f881fc", "format": "json"}]: dispatch Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:861ea875-98ef-40e2-b6cc-dda240f881fc, vol_name:cephfs) < "" Oct 14 06:20:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:861ea875-98ef-40e2-b6cc-dda240f881fc, vol_name:cephfs) < "" Oct 14 06:20:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:20:58 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:20:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e216 do_prune osdmap full prune enabled Oct 14 06:20:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e217 e217: 6 total, 6 up, 6 in Oct 14 06:20:58 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e217: 6 total, 6 up, 6 in Oct 14 06:20:58 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:20:58 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow 
r"], "format": "json"} : dispatch Oct 14 06:20:58 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:20:59 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a27e628-ba54-4cb6-9e1c-c2178456fa95", "format": "json"}]: dispatch Oct 14 06:20:59 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:59 localhost nova_compute[295778]: 2025-10-14 10:20:59.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:20:59 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:20:59 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a27e628-ba54-4cb6-9e1c-c2178456fa95' of type subvolume Oct 14 06:20:59 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:20:59.201+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a27e628-ba54-4cb6-9e1c-c2178456fa95' of type subvolume Oct 14 06:20:59 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a27e628-ba54-4cb6-9e1c-c2178456fa95", "force": true, "format": "json"}]: dispatch Oct 14 06:20:59 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:59 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7a27e628-ba54-4cb6-9e1c-c2178456fa95'' moved to trashcan Oct 14 06:20:59 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:20:59 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a27e628-ba54-4cb6-9e1c-c2178456fa95, vol_name:cephfs) < "" Oct 14 06:20:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 198 MiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 137 KiB/s rd, 55 MiB/s wr, 263 op/s Oct 14 06:21:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:21:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9db203ad-34aa-415e-933d-fcc7207e4298", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9db203ad-34aa-415e-933d-fcc7207e4298, vol_name:cephfs) < "" Oct 14 06:21:00 localhost ceph-mgr[300442]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9db203ad-34aa-415e-933d-fcc7207e4298/.meta.tmp' Oct 14 06:21:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9db203ad-34aa-415e-933d-fcc7207e4298/.meta.tmp' to config b'/volumes/_nogroup/9db203ad-34aa-415e-933d-fcc7207e4298/.meta' Oct 14 06:21:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9db203ad-34aa-415e-933d-fcc7207e4298, vol_name:cephfs) < "" Oct 14 06:21:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9db203ad-34aa-415e-933d-fcc7207e4298", "format": "json"}]: dispatch Oct 14 06:21:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9db203ad-34aa-415e-933d-fcc7207e4298, vol_name:cephfs) < "" Oct 14 06:21:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9db203ad-34aa-415e-933d-fcc7207e4298, vol_name:cephfs) < "" Oct 14 06:21:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:21:00 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:21:00 localhost podman[246584]: time="2025-10-14T10:21:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:21:00 localhost podman[246584]: @ - - [14/Oct/2025:10:21:00 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:21:00 localhost podman[246584]: @ - - [14/Oct/2025:10:21:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18910 "" "Go-http-client/1.1" Oct 14 06:21:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e217 do_prune osdmap full prune enabled Oct 14 06:21:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e218 e218: 6 total, 6 up, 6 in Oct 14 06:21:01 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e218: 6 total, 6 up, 6 in Oct 14 06:21:01 localhost nova_compute[295778]: 2025-10-14 10:21:01.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:01 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch Oct 14 06:21:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 14 06:21:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:21:01 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 14 06:21:01 localhost 
ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 14 06:21:01 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 14 06:21:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:01 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch Oct 14 06:21:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:01 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:21:01 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:21:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 117 KiB/s wr, 63 op/s Oct 14 06:21:02 localhost 
ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:21:02 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 14 06:21:02 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 14 06:21:02 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "861ea875-98ef-40e2-b6cc-dda240f881fc", "format": "json"}]: dispatch Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:861ea875-98ef-40e2-b6cc-dda240f881fc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:861ea875-98ef-40e2-b6cc-dda240f881fc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:02 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:02.140+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '861ea875-98ef-40e2-b6cc-dda240f881fc' of type subvolume Oct 14 06:21:02 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '861ea875-98ef-40e2-b6cc-dda240f881fc' of type subvolume Oct 14 06:21:02 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "861ea875-98ef-40e2-b6cc-dda240f881fc", "force": true, 
"format": "json"}]: dispatch Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:861ea875-98ef-40e2-b6cc-dda240f881fc, vol_name:cephfs) < "" Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/861ea875-98ef-40e2-b6cc-dda240f881fc'' moved to trashcan Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:861ea875-98ef-40e2-b6cc-dda240f881fc, vol_name:cephfs) < "" Oct 14 06:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:21:02 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:21:02.319 270389 INFO neutron.agent.linux.ip_lib [None req-a4200852-343c-46f6-b98f-cd046475afbc - - - - - -] Device tap23deb9f6-61 cannot be used as it has no MAC address#033[00m Oct 14 06:21:02 localhost nova_compute[295778]: 2025-10-14 10:21:02.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:02 localhost kernel: device tap23deb9f6-61 entered promiscuous mode Oct 14 06:21:02 localhost NetworkManager[5972]: [1760437262.3616] manager: (tap23deb9f6-61): new Generic device (/org/freedesktop/NetworkManager/Devices/72) Oct 14 06:21:02 localhost nova_compute[295778]: 2025-10-14 10:21:02.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:02 localhost ovn_controller[156286]: 2025-10-14T10:21:02Z|00403|binding|INFO|Claiming lport 23deb9f6-616f-448c-ae46-d7f2a199026d for this chassis. Oct 14 06:21:02 localhost ovn_controller[156286]: 2025-10-14T10:21:02Z|00404|binding|INFO|23deb9f6-616f-448c-ae46-d7f2a199026d: Claiming unknown Oct 14 06:21:02 localhost systemd-udevd[343761]: Network interface NamePolicy= disabled on kernel command line. 
Oct 14 06:21:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:02.375 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-bbca6473-a82b-41d0-9b1d-2e7c91770bbe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbca6473-a82b-41d0-9b1d-2e7c91770bbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ccd80ea567d40a9bfd81db22cd13b2f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35a7b0f0-f490-4a04-9417-555ec81bac5a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=23deb9f6-616f-448c-ae46-d7f2a199026d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:21:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:02.377 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 23deb9f6-616f-448c-ae46-d7f2a199026d in datapath bbca6473-a82b-41d0-9b1d-2e7c91770bbe bound to our chassis#033[00m Oct 14 06:21:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:02.380 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Port 807a1632-b630-4473-94d6-d54daff44ed9 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 14 06:21:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:02.380 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bbca6473-a82b-41d0-9b1d-2e7c91770bbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:21:02 localhost podman[343720]: 2025-10-14 10:21:02.383919163 +0000 UTC m=+0.129441875 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:21:02 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:02.381 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[60ac6669-32fe-44cb-893d-d7f7c53c9189]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:21:02 localhost podman[343719]: 2025-10-14 10:21:02.344801083 +0000 UTC m=+0.095774970 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:21:02 localhost podman[343720]: 2025-10-14 10:21:02.401342897 +0000 UTC m=+0.146865639 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:21:02 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0eb88f0a-5596-479e-940f-8e6a2102a41a", "format": "json"}]: dispatch Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:02 localhost ovn_controller[156286]: 2025-10-14T10:21:02Z|00405|binding|INFO|Setting lport 23deb9f6-616f-448c-ae46-d7f2a199026d ovn-installed in OVS Oct 14 06:21:02 localhost ovn_controller[156286]: 2025-10-14T10:21:02Z|00406|binding|INFO|Setting lport 23deb9f6-616f-448c-ae46-d7f2a199026d up in Southbound Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:02 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:02.418+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0eb88f0a-5596-479e-940f-8e6a2102a41a' of type subvolume Oct 14 06:21:02 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0eb88f0a-5596-479e-940f-8e6a2102a41a' of type subvolume Oct 14 06:21:02 localhost nova_compute[295778]: 2025-10-14 10:21:02.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:02 localhost systemd[1]: 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:21:02 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0eb88f0a-5596-479e-940f-8e6a2102a41a", "force": true, "format": "json"}]: dispatch Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:21:02 localhost podman[343719]: 2025-10-14 10:21:02.432352071 +0000 UTC m=+0.183325928 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0eb88f0a-5596-479e-940f-8e6a2102a41a'' moved to trashcan Oct 14 06:21:02 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:21:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0eb88f0a-5596-479e-940f-8e6a2102a41a, vol_name:cephfs) < "" Oct 14 06:21:02 localhost nova_compute[295778]: 2025-10-14 10:21:02.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:02 localhost nova_compute[295778]: 2025-10-14 10:21:02.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:03 localhost openstack_network_exporter[248748]: ERROR 10:21:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:21:03 localhost openstack_network_exporter[248748]: ERROR 10:21:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:21:03 localhost openstack_network_exporter[248748]: ERROR 10:21:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:21:03 localhost openstack_network_exporter[248748]: 
ERROR 10:21:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:21:03 localhost openstack_network_exporter[248748]: Oct 14 06:21:03 localhost openstack_network_exporter[248748]: ERROR 10:21:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:21:03 localhost openstack_network_exporter[248748]: Oct 14 06:21:03 localhost podman[343820]: Oct 14 06:21:03 localhost podman[343820]: 2025-10-14 10:21:03.388442447 +0000 UTC m=+0.143770716 container create 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 14 06:21:03 localhost podman[343820]: 2025-10-14 10:21:03.290195393 +0000 UTC m=+0.045523702 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:21:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 104 KiB/s wr, 56 op/s Oct 14 06:21:03 localhost systemd[1]: Started libpod-conmon-9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23.scope. Oct 14 06:21:03 localhost systemd[1]: Started libcrun container. 
Oct 14 06:21:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b11ea2f347287c9fe44de9fab9dcad7b5518dde9ed4283a4529944a87905e20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:21:03 localhost podman[343820]: 2025-10-14 10:21:03.466065672 +0000 UTC m=+0.221393941 container init 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:21:03 localhost podman[343820]: 2025-10-14 10:21:03.47238881 +0000 UTC m=+0.227717079 container start 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:21:03 localhost dnsmasq[343838]: started, version 2.85 cachesize 150 Oct 14 06:21:03 localhost dnsmasq[343838]: DNS service limited to local subnets Oct 14 06:21:03 localhost dnsmasq[343838]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:21:03 localhost dnsmasq[343838]: warning: no upstream servers 
configured Oct 14 06:21:03 localhost dnsmasq-dhcp[343838]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:21:03 localhost dnsmasq[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/addn_hosts - 0 addresses Oct 14 06:21:03 localhost dnsmasq-dhcp[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/host Oct 14 06:21:03 localhost dnsmasq-dhcp[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/opts Oct 14 06:21:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:21:03.610 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:21:03Z, description=, device_id=3cf1c66f-ba39-44a2-95bc-f46e66200037, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5718ed2e-2135-44cf-bc38-83ca5cd9b36d, ip_allocation=immediate, mac_address=fa:16:3e:52:73:21, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:21:00Z, description=, dns_domain=, id=bbca6473-a82b-41d0-9b1d-2e7c91770bbe, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-144678220-network, port_security_enabled=True, project_id=1ccd80ea567d40a9bfd81db22cd13b2f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17075, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3287, status=ACTIVE, subnets=['2d3bf6b3-3ea3-4dde-b3f4-546965617a6f'], tags=[], tenant_id=1ccd80ea567d40a9bfd81db22cd13b2f, updated_at=2025-10-14T10:21:00Z, vlan_transparent=None, network_id=bbca6473-a82b-41d0-9b1d-2e7c91770bbe, port_security_enabled=False, project_id=1ccd80ea567d40a9bfd81db22cd13b2f, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3295, status=DOWN, tags=[], tenant_id=1ccd80ea567d40a9bfd81db22cd13b2f, updated_at=2025-10-14T10:21:03Z on network bbca6473-a82b-41d0-9b1d-2e7c91770bbe#033[00m Oct 14 06:21:03 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:21:03.726 270389 INFO neutron.agent.dhcp.agent [None req-8f8b12b3-7eec-4743-a419-0d6b49ec85bc - - - - - -] DHCP configuration for ports {'df81a730-de97-4d0f-880b-0c17a30e586d'} is completed#033[00m Oct 14 06:21:03 localhost podman[343854]: 2025-10-14 10:21:03.903272973 +0000 UTC m=+0.061768064 container kill 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true) Oct 14 06:21:03 localhost dnsmasq[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/addn_hosts - 1 addresses Oct 14 06:21:03 localhost dnsmasq-dhcp[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/host Oct 14 06:21:03 localhost dnsmasq-dhcp[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/opts Oct 14 06:21:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:21:04.193 270389 INFO neutron.agent.dhcp.agent [None req-28f62ae1-41ce-47f7-bc00-73af7a3abe77 - - - - - -] DHCP configuration for ports {'5718ed2e-2135-44cf-bc38-83ca5cd9b36d'} is completed#033[00m Oct 14 06:21:04 localhost nova_compute[295778]: 2025-10-14 10:21:04.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e218 do_prune osdmap full prune enabled Oct 14 06:21:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e219 e219: 6 total, 6 up, 6 in Oct 14 06:21:04 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e219: 6 total, 6 up, 6 in Oct 14 06:21:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:21:04.437 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:21:03Z, description=, device_id=3cf1c66f-ba39-44a2-95bc-f46e66200037, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5718ed2e-2135-44cf-bc38-83ca5cd9b36d, ip_allocation=immediate, mac_address=fa:16:3e:52:73:21, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:21:00Z, description=, dns_domain=, id=bbca6473-a82b-41d0-9b1d-2e7c91770bbe, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-144678220-network, port_security_enabled=True, project_id=1ccd80ea567d40a9bfd81db22cd13b2f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17075, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3287, status=ACTIVE, subnets=['2d3bf6b3-3ea3-4dde-b3f4-546965617a6f'], tags=[], tenant_id=1ccd80ea567d40a9bfd81db22cd13b2f, updated_at=2025-10-14T10:21:00Z, vlan_transparent=None, network_id=bbca6473-a82b-41d0-9b1d-2e7c91770bbe, port_security_enabled=False, project_id=1ccd80ea567d40a9bfd81db22cd13b2f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, 
security_groups=[], standard_attr_id=3295, status=DOWN, tags=[], tenant_id=1ccd80ea567d40a9bfd81db22cd13b2f, updated_at=2025-10-14T10:21:03Z on network bbca6473-a82b-41d0-9b1d-2e7c91770bbe#033[00m Oct 14 06:21:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:21:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:21:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice_bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:21:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:21:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:04 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:04 localhost dnsmasq[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/addn_hosts - 1 addresses Oct 14 06:21:04 localhost dnsmasq-dhcp[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/host Oct 14 06:21:04 localhost podman[343893]: 2025-10-14 10:21:04.635985266 +0000 UTC m=+0.054151932 container kill 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:21:04 localhost dnsmasq-dhcp[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/opts Oct 14 06:21:04 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:21:04.946 270389 INFO neutron.agent.dhcp.agent [None req-73939adc-6b49-4df7-a33a-ee6e9adc9071 - - - - - -] DHCP configuration for ports {'5718ed2e-2135-44cf-bc38-83ca5cd9b36d'} is completed#033[00m Oct 14 06:21:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:21:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e219 do_prune osdmap full prune enabled Oct 14 06:21:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e220 e220: 6 total, 6 up, 6 in Oct 14 06:21:05 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e220: 6 total, 6 up, 6 in Oct 14 06:21:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:21:05 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/778396529' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:21:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:21:05 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/778396529' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:21:05 localhost ovn_controller[156286]: 2025-10-14T10:21:05Z|00407|ovn_bfd|INFO|Enabled BFD on interface ovn-31b4da-0 Oct 14 06:21:05 localhost ovn_controller[156286]: 2025-10-14T10:21:05Z|00408|ovn_bfd|INFO|Enabled BFD on interface ovn-953af5-0 Oct 14 06:21:05 localhost ovn_controller[156286]: 2025-10-14T10:21:05Z|00409|ovn_bfd|INFO|Enabled BFD on interface ovn-4e3575-0 Oct 14 06:21:05 localhost nova_compute[295778]: 2025-10-14 10:21:05.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:05 localhost nova_compute[295778]: 2025-10-14 10:21:05.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:05 localhost nova_compute[295778]: 2025-10-14 10:21:05.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:05 localhost nova_compute[295778]: 2025-10-14 10:21:05.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:05 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "57dfe35c-de82-4809-8c98-457f9894a5ab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57dfe35c-de82-4809-8c98-457f9894a5ab, vol_name:cephfs) < "" Oct 14 06:21:05 localhost ceph-mgr[300442]: 
log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 90 KiB/s rd, 221 KiB/s wr, 148 op/s Oct 14 06:21:05 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:05 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:05 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57dfe35c-de82-4809-8c98-457f9894a5ab/.meta.tmp' Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57dfe35c-de82-4809-8c98-457f9894a5ab/.meta.tmp' to config b'/volumes/_nogroup/57dfe35c-de82-4809-8c98-457f9894a5ab/.meta' Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57dfe35c-de82-4809-8c98-457f9894a5ab, 
vol_name:cephfs) < ""
Oct 14 06:21:05 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57dfe35c-de82-4809-8c98-457f9894a5ab", "format": "json"}]: dispatch
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57dfe35c-de82-4809-8c98-457f9894a5ab, vol_name:cephfs) < ""
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57dfe35c-de82-4809-8c98-457f9894a5ab, vol_name:cephfs) < ""
Oct 14 06:21:05 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "snap_name": "8e079fb7-b93f-42b7-ad10-c3dd56bdced4", "force": true, "format": "json"}]: dispatch
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolumegroup snapshot rm, snap_name:8e079fb7-b93f-42b7-ad10-c3dd56bdced4, vol_name:cephfs) < ""
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolumegroup snapshot rm, snap_name:8e079fb7-b93f-42b7-ad10-c3dd56bdced4, vol_name:cephfs) < ""
Oct 14 06:21:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:21:05 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:21:05 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "format": "json"}]: dispatch
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:05 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:05.640+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c9f8ac4b-082a-4fd7-a4ce-57524c5e0629' of type subvolume
Oct 14 06:21:05 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c9f8ac4b-082a-4fd7-a4ce-57524c5e0629' of type subvolume
Oct 14 06:21:05 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c9f8ac4b-082a-4fd7-a4ce-57524c5e0629", "force": true, "format": "json"}]: dispatch
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < ""
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c9f8ac4b-082a-4fd7-a4ce-57524c5e0629'' moved to trashcan
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:21:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c9f8ac4b-082a-4fd7-a4ce-57524c5e0629, vol_name:cephfs) < ""
Oct 14 06:21:06 localhost nova_compute[295778]: 2025-10-14 10:21:06.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:21:06 localhost nova_compute[295778]: 2025-10-14 10:21:06.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:21:06 localhost nova_compute[295778]: 2025-10-14 10:21:06.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:21:06 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9db203ad-34aa-415e-933d-fcc7207e4298", "format": "json"}]: dispatch
Oct 14 06:21:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9db203ad-34aa-415e-933d-fcc7207e4298, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9db203ad-34aa-415e-933d-fcc7207e4298, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:06 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:06.328+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9db203ad-34aa-415e-933d-fcc7207e4298' of type subvolume
Oct 14 06:21:06 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9db203ad-34aa-415e-933d-fcc7207e4298' of type subvolume
Oct 14 06:21:06 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9db203ad-34aa-415e-933d-fcc7207e4298", "force": true, "format": "json"}]: dispatch
Oct 14 06:21:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9db203ad-34aa-415e-933d-fcc7207e4298, vol_name:cephfs) < ""
Oct 14 06:21:06 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9db203ad-34aa-415e-933d-fcc7207e4298'' moved to trashcan
Oct 14 06:21:06 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:21:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9db203ad-34aa-415e-933d-fcc7207e4298, vol_name:cephfs) < ""
Oct 14 06:21:06 localhost nova_compute[295778]: 2025-10-14 10:21:06.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:21:07 localhost nova_compute[295778]: 2025-10-14 10:21:07.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:21:07 localhost nova_compute[295778]: 2025-10-14 10:21:07.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:21:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 99 KiB/s wr, 80 op/s
Oct 14 06:21:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 14 06:21:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Oct 14 06:21:07 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Oct 14 06:21:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Oct 14 06:21:07 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Oct 14 06:21:07 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 14 06:21:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch
Oct 14 06:21:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1
Oct 14 06:21:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 14 06:21:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:08 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Oct 14 06:21:08 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Oct 14 06:21:08 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Oct 14 06:21:08 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f43db045-2fb7-48a4-b8b2-3c671e6b1ee8", "group_name": "cb16bd83-978e-4905-9591-cd016430587c", "format": "json"}]: dispatch
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f43db045-2fb7-48a4-b8b2-3c671e6b1ee8, format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f43db045-2fb7-48a4-b8b2-3c671e6b1ee8, format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:08 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:08.633+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f43db045-2fb7-48a4-b8b2-3c671e6b1ee8' of type subvolume
Oct 14 06:21:08 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f43db045-2fb7-48a4-b8b2-3c671e6b1ee8' of type subvolume
Oct 14 06:21:08 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f43db045-2fb7-48a4-b8b2-3c671e6b1ee8", "force": true, "group_name": "cb16bd83-978e-4905-9591-cd016430587c", "format": "json"}]: dispatch
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, prefix:fs subvolume rm, sub_name:f43db045-2fb7-48a4-b8b2-3c671e6b1ee8, vol_name:cephfs) < ""
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/cb16bd83-978e-4905-9591-cd016430587c/f43db045-2fb7-48a4-b8b2-3c671e6b1ee8'' moved to trashcan
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, prefix:fs subvolume rm, sub_name:f43db045-2fb7-48a4-b8b2-3c671e6b1ee8, vol_name:cephfs) < ""
Oct 14 06:21:08 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57dfe35c-de82-4809-8c98-457f9894a5ab", "format": "json"}]: dispatch
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57dfe35c-de82-4809-8c98-457f9894a5ab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57dfe35c-de82-4809-8c98-457f9894a5ab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:08 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57dfe35c-de82-4809-8c98-457f9894a5ab' of type subvolume
Oct 14 06:21:08 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:08.883+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57dfe35c-de82-4809-8c98-457f9894a5ab' of type subvolume
Oct 14 06:21:08 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57dfe35c-de82-4809-8c98-457f9894a5ab", "force": true, "format": "json"}]: dispatch
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57dfe35c-de82-4809-8c98-457f9894a5ab, vol_name:cephfs) < ""
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57dfe35c-de82-4809-8c98-457f9894a5ab'' moved to trashcan
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:21:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57dfe35c-de82-4809-8c98-457f9894a5ab, vol_name:cephfs) < ""
Oct 14 06:21:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "343c69e2-b2b0-4638-b91f-68171be807ee", "auth_id": "admin", "format": "json"}]: dispatch
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < ""
Oct 14 06:21:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:21:09
Oct 14 06:21:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:21:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap
Oct 14 06:21:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['.mgr', 'manila_data', 'images', 'vms', 'manila_metadata', 'volumes', 'backups']
Oct 14 06:21:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < ""
Oct 14 06:21:09 localhost ceph-mgr[300442]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Oct 14 06:21:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:09.106+0000 7ff5d7f75640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:21:09 localhost nova_compute[295778]: 2025-10-14 10:21:09.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:21:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "343c69e2-b2b0-4638-b91f-68171be807ee", "format": "json"}]: dispatch
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:343c69e2-b2b0-4638-b91f-68171be807ee, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:343c69e2-b2b0-4638-b91f-68171be807ee, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:09 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:09.386+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '343c69e2-b2b0-4638-b91f-68171be807ee' of type subvolume
Oct 14 06:21:09 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '343c69e2-b2b0-4638-b91f-68171be807ee' of type subvolume
Oct 14 06:21:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "343c69e2-b2b0-4638-b91f-68171be807ee", "force": true, "format": "json"}]: dispatch
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < ""
Oct 14 06:21:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 78 KiB/s wr, 63 op/s
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/343c69e2-b2b0-4638-b91f-68171be807ee'' moved to trashcan
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:21:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:343c69e2-b2b0-4638-b91f-68171be807ee, vol_name:cephfs) < ""
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:21:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32)
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014874720826353993 of space, bias 1.0, pg target 0.2969985924995347 quantized to 32 (current 32)
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.9989356504745952e-06 of space, bias 1.0, pg target 0.0005967881944444444 quantized to 32 (current 32)
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:21:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.000518815867532105 of space, bias 4.0, pg target 0.4129774305555556 quantized to 16 (current 16)
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:21:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:21:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e221 e221: 6 total, 6 up, 6 in
Oct 14 06:21:09 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e221: 6 total, 6 up, 6 in
Oct 14 06:21:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:21:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 14 06:21:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e222 e222: 6 total, 6 up, 6 in
Oct 14 06:21:10 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e222: 6 total, 6 up, 6 in
Oct 14 06:21:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:21:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:21:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:21:10 localhost podman[343918]: 2025-10-14 10:21:10.618038821 +0000 UTC m=+0.150967968 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, vcs-type=git)
Oct 14 06:21:10 localhost podman[343920]: 2025-10-14 10:21:10.575971551 +0000 UTC m=+0.102581990 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 14 06:21:10 localhost podman[343918]: 2025-10-14 10:21:10.659229977 +0000 UTC m=+0.192159104 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 14 06:21:10 localhost podman[343919]: 2025-10-14 10:21:10.671082612 +0000 UTC m=+0.199872509 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:21:10 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:21:10 localhost podman[343920]: 2025-10-14 10:21:10.710361847 +0000 UTC m=+0.236972246 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 06:21:10 localhost podman[343919]: 2025-10-14 10:21:10.713407198 +0000 UTC m=+0.242197095 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:21:10 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:21:10 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:21:11 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch
Oct 14 06:21:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:21:11 localhost nova_compute[295778]: 2025-10-14 10:21:11.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:21:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Oct 14 06:21:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Oct 14 06:21:11 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice_bob with tenant 40ca4558a36f42aeba3e8c219141b2fc
Oct 14 06:21:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:21:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286
172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:11 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:21:11 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1530181701' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:21:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:21:11 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1530181701' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:21:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 115 KiB/s rd, 211 KiB/s wr, 172 op/s Oct 14 06:21:11 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:11 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:11 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2e879a16-a13a-427f-960d-74f27902f66c", 
"group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "format": "json"}]: dispatch Oct 14 06:21:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2e879a16-a13a-427f-960d-74f27902f66c, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2e879a16-a13a-427f-960d-74f27902f66c, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:12 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:12.213+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e879a16-a13a-427f-960d-74f27902f66c' of type subvolume Oct 14 06:21:12 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e879a16-a13a-427f-960d-74f27902f66c' of type subvolume Oct 14 06:21:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2e879a16-a13a-427f-960d-74f27902f66c", "force": true, "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "format": "json"}]: dispatch Oct 14 06:21:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolume rm, sub_name:2e879a16-a13a-427f-960d-74f27902f66c, vol_name:cephfs) < "" Oct 14 06:21:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/dde727d8-0fca-4f65-b7dd-370696baa208/2e879a16-a13a-427f-960d-74f27902f66c'' moved to trashcan Oct 14 06:21:12 localhost ceph-mgr[300442]: [volumes 
INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:21:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolume rm, sub_name:2e879a16-a13a-427f-960d-74f27902f66c, vol_name:cephfs) < "" Oct 14 06:21:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 89 KiB/s wr, 73 op/s Oct 14 06:21:14 localhost nova_compute[295778]: 2025-10-14 10:21:14.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e222 do_prune osdmap full prune enabled Oct 14 06:21:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e223 e223: 6 total, 6 up, 6 in Oct 14 06:21:14 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e223: 6 total, 6 up, 6 in Oct 14 06:21:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:21:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e223 do_prune osdmap full prune enabled Oct 14 06:21:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e224 e224: 6 total, 6 up, 6 in Oct 14 06:21:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e224: 6 total, 6 up, 6 in Oct 14 06:21:15 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:21:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 14 06:21:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:21:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:21:15 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. 
Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:15 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 170 KiB/s wr, 164 op/s Oct 14 06:21:15 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3226f5dc-33cd-4972-a5c5-a246f84f4625", "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "format": "json"}]: dispatch Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:3226f5dc-33cd-4972-a5c5-a246f84f4625, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3226f5dc-33cd-4972-a5c5-a246f84f4625, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:15 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:15.471+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3226f5dc-33cd-4972-a5c5-a246f84f4625' of type subvolume Oct 14 06:21:15 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3226f5dc-33cd-4972-a5c5-a246f84f4625' of type subvolume Oct 14 06:21:15 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3226f5dc-33cd-4972-a5c5-a246f84f4625", "force": true, "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "format": "json"}]: dispatch Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolume rm, sub_name:3226f5dc-33cd-4972-a5c5-a246f84f4625, vol_name:cephfs) < "" Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/dde727d8-0fca-4f65-b7dd-370696baa208/3226f5dc-33cd-4972-a5c5-a246f84f4625'' moved to trashcan Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:21:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_rm(force:True, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolume rm, sub_name:3226f5dc-33cd-4972-a5c5-a246f84f4625, vol_name:cephfs) < "" Oct 14 06:21:15 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:15 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:21:15 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:21:16 localhost nova_compute[295778]: 2025-10-14 10:21:16.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 90 KiB/s rd, 137 KiB/s wr, 132 op/s Oct 14 06:21:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e224 do_prune osdmap full prune enabled Oct 14 06:21:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e225 e225: 6 total, 6 up, 6 in Oct 14 06:21:17 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e225: 6 total, 6 up, 6 in Oct 14 06:21:18 localhost dnsmasq[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/addn_hosts - 0 addresses Oct 14 06:21:18 localhost dnsmasq-dhcp[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/host Oct 14 06:21:18 localhost systemd[1]: tmp-crun.QQDAkH.mount: Deactivated successfully. 
Oct 14 06:21:18 localhost dnsmasq-dhcp[343838]: read /var/lib/neutron/dhcp/bbca6473-a82b-41d0-9b1d-2e7c91770bbe/opts Oct 14 06:21:18 localhost podman[344000]: 2025-10-14 10:21:18.482897016 +0000 UTC m=+0.073847975 container kill 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:21:18 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:21:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:21:18 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:21:18 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice bob with 
tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:21:18 localhost ovn_controller[156286]: 2025-10-14T10:21:18Z|00410|ovn_bfd|INFO|Disabled BFD on interface ovn-31b4da-0 Oct 14 06:21:18 localhost ovn_controller[156286]: 2025-10-14T10:21:18Z|00411|ovn_bfd|INFO|Disabled BFD on interface ovn-953af5-0 Oct 14 06:21:18 localhost ovn_controller[156286]: 2025-10-14T10:21:18Z|00412|ovn_bfd|INFO|Disabled BFD on interface ovn-4e3575-0 Oct 14 06:21:18 localhost nova_compute[295778]: 2025-10-14 10:21:18.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:18 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:21:18 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:18 localhost nova_compute[295778]: 2025-10-14 10:21:18.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:18 localhost nova_compute[295778]: 2025-10-14 10:21:18.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:18 localhost ceph-mon[307093]: 
log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "cb16bd83-978e-4905-9591-cd016430587c", "force": true, "format": "json"}]: dispatch Oct 14 06:21:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:cb16bd83-978e-4905-9591-cd016430587c, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:19 localhost nova_compute[295778]: 2025-10-14 10:21:19.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 49 KiB/s wr, 65 op/s Oct 14 06:21:19 localhost nova_compute[295778]: 2025-10-14 10:21:19.515 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:19 localhost ovn_controller[156286]: 2025-10-14T10:21:19Z|00413|binding|INFO|Releasing lport 23deb9f6-616f-448c-ae46-d7f2a199026d from this chassis (sb_readonly=0) Oct 14 06:21:19 localhost kernel: device tap23deb9f6-61 left promiscuous mode Oct 14 06:21:19 localhost ovn_controller[156286]: 2025-10-14T10:21:19Z|00414|binding|INFO|Setting lport 23deb9f6-616f-448c-ae46-d7f2a199026d down in Southbound Oct 14 06:21:19 localhost nova_compute[295778]: 2025-10-14 10:21:19.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "dde727d8-0fca-4f65-b7dd-370696baa208", "force": true, "format": "json"}]: dispatch Oct 14 06:21:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:dde727d8-0fca-4f65-b7dd-370696baa208, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:19.608 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-bbca6473-a82b-41d0-9b1d-2e7c91770bbe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbca6473-a82b-41d0-9b1d-2e7c91770bbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ccd80ea567d40a9bfd81db22cd13b2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35a7b0f0-f490-4a04-9417-555ec81bac5a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=23deb9f6-616f-448c-ae46-d7f2a199026d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:21:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:19.610 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 23deb9f6-616f-448c-ae46-d7f2a199026d in datapath bbca6473-a82b-41d0-9b1d-2e7c91770bbe unbound from our chassis#033[00m Oct 14 06:21:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:19.612 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bbca6473-a82b-41d0-9b1d-2e7c91770bbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:21:19 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:19.613 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[0a37e5a0-7dad-484e-bc50-4bef8328cd68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:21:19 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", 
"entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:21:19 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:19 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e225 do_prune osdmap full prune enabled Oct 14 06:21:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e226 e226: 6 total, 6 up, 6 in Oct 14 06:21:19 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e226: 6 total, 6 up, 6 in Oct 14 06:21:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:21:21 localhost nova_compute[295778]: 2025-10-14 10:21:21.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 119 KiB/s rd, 116 KiB/s wr, 174 op/s Oct 14 06:21:21 localhost nova_compute[295778]: 2025-10-14 10:21:21.904 2 DEBUG 
oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:21:21 localhost nova_compute[295778]: 2025-10-14 10:21:21.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 06:21:21 localhost nova_compute[295778]: 2025-10-14 10:21:21.963 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 06:21:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 06:21:22 localhost podman[344024]: 2025-10-14 10:21:22.537992497 +0000 UTC m=+0.078434877 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:21:22 localhost podman[344024]: 2025-10-14 10:21:22.551276531 +0000 UTC m=+0.091718981 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm)
Oct 14 06:21:22 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 06:21:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 14 06:21:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Oct 14 06:21:23 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 14 06:21:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Oct 14 06:21:23 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 14 06:21:23 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 14 06:21:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 14 06:21:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:23 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1
Oct 14 06:21:23 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 14 06:21:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 14 06:21:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1321257980' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 14 06:21:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 14 06:21:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1321257980' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 14 06:21:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 55 KiB/s wr, 89 op/s
Oct 14 06:21:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 14 06:21:23 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 14 06:21:23 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 14 06:21:23 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 14 06:21:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e227 e227: 6 total, 6 up, 6 in
Oct 14 06:21:23 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e227: 6 total, 6 up, 6 in
Oct 14 06:21:24 localhost dnsmasq[343838]: exiting on receipt of SIGTERM
Oct 14 06:21:24 localhost podman[344061]: 2025-10-14 10:21:24.225324106 +0000 UTC m=+0.068854482 container kill 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3)
Oct 14 06:21:24 localhost systemd[1]: libpod-9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23.scope: Deactivated successfully.
Oct 14 06:21:24 localhost nova_compute[295778]: 2025-10-14 10:21:24.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:21:24 localhost podman[344075]: 2025-10-14 10:21:24.309493845 +0000 UTC m=+0.065625426 container died 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 06:21:24 localhost podman[344075]: 2025-10-14 10:21:24.345628167 +0000 UTC m=+0.101759718 container cleanup 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 06:21:24 localhost systemd[1]: libpod-conmon-9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23.scope: Deactivated successfully.
Oct 14 06:21:24 localhost podman[344077]: 2025-10-14 10:21:24.384640575 +0000 UTC m=+0.132944278 container remove 9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbca6473-a82b-41d0-9b1d-2e7c91770bbe, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 06:21:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:21:24.410 270389 INFO neutron.agent.dhcp.agent [None req-339a7368-e98a-45d5-8e42-72842aa69936 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:21:24 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:21:24.430 270389 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 14 06:21:24 localhost nova_compute[295778]: 2025-10-14 10:21:24.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:21:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:21:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 14 06:21:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e228 e228: 6 total, 6 up, 6 in
Oct 14 06:21:25 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e228: 6 total, 6 up, 6 in
Oct 14 06:21:25 localhost systemd[1]: var-lib-containers-storage-overlay-4b11ea2f347287c9fe44de9fab9dcad7b5518dde9ed4283a4529944a87905e20-merged.mount: Deactivated successfully.
Oct 14 06:21:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a151fdc1e3b76448c418c127da9cea21babd4c10af55c00268761594d63de23-userdata-shm.mount: Deactivated successfully.
Oct 14 06:21:25 localhost systemd[1]: run-netns-qdhcp\x2dbbca6473\x2da82b\x2d41d0\x2d9b1d\x2d2e7c91770bbe.mount: Deactivated successfully.
Oct 14 06:21:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 144 KiB/s rd, 104 KiB/s wr, 201 op/s
Oct 14 06:21:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 14 06:21:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e229 e229: 6 total, 6 up, 6 in
Oct 14 06:21:26 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e229: 6 total, 6 up, 6 in
Oct 14 06:21:26 localhost nova_compute[295778]: 2025-10-14 10:21:26.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:21:26 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch
Oct 14 06:21:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:21:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Oct 14 06:21:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 14 06:21:26 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice bob with tenant 40ca4558a36f42aeba3e8c219141b2fc
Oct 14 06:21:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:21:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:21:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:21:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:21:26 localhost nova_compute[295778]: 2025-10-14 10:21:26.963 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:21:26 localhost nova_compute[295778]: 2025-10-14 10:21:26.995 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:21:26 localhost nova_compute[295778]: 2025-10-14 10:21:26.996 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:21:26 localhost nova_compute[295778]: 2025-10-14 10:21:26.996 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:21:26 localhost nova_compute[295778]: 2025-10-14 10:21:26.997 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 06:21:26 localhost nova_compute[295778]: 2025-10-14 10:21:26.997 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:21:27 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 14 06:21:27 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:21:27 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:21:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 31 KiB/s wr, 81 op/s
Oct 14 06:21:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 14 06:21:27 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2004478551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 14 06:21:27 localhost nova_compute[295778]: 2025-10-14 10:21:27.469 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:21:27 localhost nova_compute[295778]: 2025-10-14 10:21:27.684 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 06:21:27 localhost nova_compute[295778]: 2025-10-14 10:21:27.686 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11369MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 06:21:27 localhost nova_compute[295778]: 2025-10-14 10:21:27.686 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:21:27 localhost nova_compute[295778]: 2025-10-14 10:21:27.687 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:21:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 14 06:21:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e230 e230: 6 total, 6 up, 6 in
Oct 14 06:21:28 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e230: 6 total, 6 up, 6 in
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.183 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.184 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.204 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.241 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.241 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.262 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.292 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.321 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 06:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 06:21:28 localhost podman[344136]: 2025-10-14 10:21:28.544194306 +0000 UTC m=+0.082602929 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 06:21:28 localhost podman[344136]: 2025-10-14 10:21:28.552015514 +0000 UTC m=+0.090424087 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:21:28 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 06:21:28 localhost systemd[1]: tmp-crun.vuImbK.mount: Deactivated successfully.
Oct 14 06:21:28 localhost podman[344137]: 2025-10-14 10:21:28.605960189 +0000 UTC m=+0.141137166 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 06:21:28 localhost podman[344137]: 2025-10-14 10:21:28.617264319 +0000 UTC m=+0.152441356 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 14 06:21:28 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.782 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.789 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.804 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.807 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 06:21:28 localhost nova_compute[295778]: 2025-10-14 10:21:28.807 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:21:29 localhost nova_compute[295778]: 2025-10-14 10:21:29.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:21:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 32 KiB/s wr, 85 op/s
Oct 14 06:21:29 localhost nova_compute[295778]: 2025-10-14 10:21:29.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:21:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 14 06:21:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct
14 06:21:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:21:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:21:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 14 06:21:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 14 06:21:30 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:21:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:30 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 14 06:21:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:21:30 localhost 
ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:21:30 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:21:30 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:30 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:21:30 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 14 06:21:30 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:21:30 localhost podman[246584]: time="2025-10-14T10:21:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:21:30 localhost podman[246584]: @ - - [14/Oct/2025:10:21:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:21:30 localhost podman[246584]: @ - - [14/Oct/2025:10:21:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18907 "" "Go-http-client/1.1" Oct 14 06:21:30 localhost nova_compute[295778]: 2025-10-14 10:21:30.921 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:21:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e230 do_prune osdmap full prune enabled Oct 14 06:21:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e231 e231: 6 total, 6 up, 6 in Oct 14 06:21:31 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e231: 6 total, 6 up, 6 in Oct 14 06:21:31 localhost nova_compute[295778]: 2025-10-14 10:21:31.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 60 KiB/s wr, 53 op/s Oct 14 06:21:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:21:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 06:21:32 localhost podman[344189]: 2025-10-14 10:21:32.555203953 +0000 UTC m=+0.095124471 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:21:32 localhost podman[344189]: 2025-10-14 10:21:32.566599587 +0000 UTC m=+0.106520125 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 14 06:21:32 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:21:32 localhost systemd[1]: tmp-crun.3fWZUv.mount: Deactivated successfully. 
Oct 14 06:21:32 localhost podman[344205]: 2025-10-14 10:21:32.65282147 +0000 UTC m=+0.088029953 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true) Oct 14 06:21:32 localhost podman[344205]: 2025-10-14 10:21:32.690117352 +0000 UTC m=+0.125325806 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 14 06:21:32 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:21:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:32.725 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:21:32 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:32.726 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:21:32 localhost nova_compute[295778]: 2025-10-14 10:21:32.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ab5b1468-8901-4c3b-ae8f-82b4933bfd41", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < "" Oct 14 06:21:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41/.meta.tmp' Oct 14 06:21:32 localhost 
ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41/.meta.tmp' to config b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41/.meta' Oct 14 06:21:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < "" Oct 14 06:21:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ab5b1468-8901-4c3b-ae8f-82b4933bfd41", "format": "json"}]: dispatch Oct 14 06:21:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < "" Oct 14 06:21:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < "" Oct 14 06:21:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:21:32 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:21:33 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:21:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 14 06:21:33 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:21:33 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:21:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e231 do_prune osdmap full prune enabled Oct 14 06:21:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e232 e232: 6 total, 6 up, 6 in Oct 14 06:21:33 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e232: 6 total, 6 up, 6 in Oct 14 06:21:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:21:33 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", 
"allow r"], "format": "json"} : dispatch Oct 14 06:21:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:21:33 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:33 localhost openstack_network_exporter[248748]: ERROR 10:21:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:21:33 localhost openstack_network_exporter[248748]: ERROR 10:21:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:21:33 localhost openstack_network_exporter[248748]: ERROR 10:21:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:21:33 localhost openstack_network_exporter[248748]: ERROR 10:21:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:21:33 localhost openstack_network_exporter[248748]: Oct 14 06:21:33 localhost openstack_network_exporter[248748]: ERROR 10:21:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:21:33 localhost openstack_network_exporter[248748]: Oct 14 06:21:33 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" 
Oct 14 06:21:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 60 KiB/s wr, 53 op/s Oct 14 06:21:33 localhost nova_compute[295778]: 2025-10-14 10:21:33.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:21:33 localhost nova_compute[295778]: 2025-10-14 10:21:33.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:21:33 localhost nova_compute[295778]: 2025-10-14 10:21:33.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:21:33 localhost nova_compute[295778]: 2025-10-14 10:21:33.929 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:21:34 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:34 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:34 localhost nova_compute[295778]: 2025-10-14 10:21:34.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "507fac9c-d803-4322-86fc-5518cf276942", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:507fac9c-d803-4322-86fc-5518cf276942, vol_name:cephfs) < "" Oct 14 06:21:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/507fac9c-d803-4322-86fc-5518cf276942/.meta.tmp' Oct 14 06:21:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/507fac9c-d803-4322-86fc-5518cf276942/.meta.tmp' to config b'/volumes/_nogroup/507fac9c-d803-4322-86fc-5518cf276942/.meta' Oct 14 06:21:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:507fac9c-d803-4322-86fc-5518cf276942, vol_name:cephfs) < "" Oct 14 06:21:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "507fac9c-d803-4322-86fc-5518cf276942", "format": "json"}]: dispatch Oct 14 06:21:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:507fac9c-d803-4322-86fc-5518cf276942, vol_name:cephfs) < "" Oct 14 06:21:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:507fac9c-d803-4322-86fc-5518cf276942, vol_name:cephfs) < "" Oct 14 06:21:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:21:34 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:21:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:21:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e232 do_prune osdmap full prune enabled Oct 14 06:21:35 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader).osd e233 e233: 6 total, 6 up, 6 in
Oct 14 06:21:35 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e233: 6 total, 6 up, 6 in
Oct 14 06:21:35 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "10eb7dd1-f447-49e8-8cbd-0c6c48bc321a", "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:21:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:10eb7dd1-f447-49e8-8cbd-0c6c48bc321a, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 14 06:21:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 14 06:21:35 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 14 06:21:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 14 06:21:35 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:21:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 14 06:21:35 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:21:35 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 2b849fe1-5505-4a17-ba31-f932af0ed411 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:21:35 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 2b849fe1-5505-4a17-ba31-f932af0ed411 (Updating node-proxy deployment (+3 -> 3))
Oct 14 06:21:35 localhost ceph-mgr[300442]: [progress INFO root] Completed event 2b849fe1-5505-4a17-ba31-f932af0ed411 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 14 06:21:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 14 06:21:35 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 14 06:21:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:10eb7dd1-f447-49e8-8cbd-0c6c48bc321a, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 14 06:21:35 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 14 06:21:35 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:21:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 2 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 169 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 124 KiB/s wr, 114 op/s
Oct 14 06:21:35 localhost nova_compute[295778]: 2025-10-14 10:21:35.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:21:35 localhost nova_compute[295778]: 2025-10-14 10:21:35.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:21:36 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ab5b1468-8901-4c3b-ae8f-82b4933bfd41", "snap_name": "01abc44a-a997-4e83-8516-b295e7091fd4", "format": "json"}]: dispatch
Oct 14 06:21:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:01abc44a-a997-4e83-8516-b295e7091fd4, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < ""
Oct 14 06:21:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:01abc44a-a997-4e83-8516-b295e7091fd4, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < ""
Oct 14 06:21:36 localhost nova_compute[295778]: 2025-10-14 10:21:36.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:21:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 14 06:21:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e234 e234: 6 total, 6 up, 6 in
Oct 14 06:21:36 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e234: 6 total, 6 up, 6 in
Oct 14 06:21:36 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch
Oct 14 06:21:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 14 06:21:36 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:21:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Oct 14 06:21:36 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 14 06:21:36 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 14 06:21:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:36 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch
Oct 14 06:21:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1
Oct 14 06:21:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 14 06:21:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:21:36 localhost nova_compute[295778]: 2025-10-14 10:21:36.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:21:36 localhost nova_compute[295778]: 2025-10-14 10:21:36.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:21:36 localhost nova_compute[295778]: 2025-10-14 10:21:36.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 06:21:37 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:21:37 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 14 06:21:37 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 14 06:21:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 2 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 169 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 64 KiB/s wr, 60 op/s
Oct 14 06:21:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 14 06:21:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e235 e235: 6 total, 6 up, 6 in
Oct 14 06:21:37 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e235: 6 total, 6 up, 6 in
Oct 14 06:21:37 localhost nova_compute[295778]: 2025-10-14 10:21:37.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:21:37 localhost nova_compute[295778]: 2025-10-14 10:21:37.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 06:21:38 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cd4f2913-4999-4703-b529-06ac3467fb6c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:21:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd4f2913-4999-4703-b529-06ac3467fb6c, vol_name:cephfs) < ""
Oct 14 06:21:38 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cd4f2913-4999-4703-b529-06ac3467fb6c/.meta.tmp'
Oct 14 06:21:38 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cd4f2913-4999-4703-b529-06ac3467fb6c/.meta.tmp' to config b'/volumes/_nogroup/cd4f2913-4999-4703-b529-06ac3467fb6c/.meta'
Oct 14 06:21:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd4f2913-4999-4703-b529-06ac3467fb6c, vol_name:cephfs) < ""
Oct 14 06:21:38 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cd4f2913-4999-4703-b529-06ac3467fb6c", "format": "json"}]: dispatch
Oct 14 06:21:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd4f2913-4999-4703-b529-06ac3467fb6c, vol_name:cephfs) < ""
Oct 14 06:21:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd4f2913-4999-4703-b529-06ac3467fb6c, vol_name:cephfs) < ""
Oct 14 06:21:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:21:38 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:21:38 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "dbba3ffb-946c-4e77-80a5-5f612dede0ba", "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:21:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:dbba3ffb-946c-4e77-80a5-5f612dede0ba, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 14 06:21:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:dbba3ffb-946c-4e77-80a5-5f612dede0ba, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 14 06:21:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 14 06:21:39 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3542532493' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 14 06:21:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 14 06:21:39 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3542532493' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 14 06:21:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:21:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:21:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:21:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:21:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:21:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:21:39 localhost nova_compute[295778]: 2025-10-14 10:21:39.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:21:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 2 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 169 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 64 KiB/s wr, 60 op/s
Oct 14 06:21:39 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events
Oct 14 06:21:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 14 06:21:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:21:39 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz'
Oct 14 06:21:39 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch
Oct 14 06:21:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:21:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 14 06:21:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:21:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice with tenant 40ca4558a36f42aeba3e8c219141b2fc
Oct 14 06:21:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:21:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:21:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:21:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:21:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:21:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 14 06:21:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e236 e236: 6 total, 6 up, 6 in
Oct 14 06:21:40 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e236: 6 total, 6 up, 6 in
Oct 14 06:21:40 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:21:40 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:21:40 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:21:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5b1468-8901-4c3b-ae8f-82b4933bfd41", "snap_name": "01abc44a-a997-4e83-8516-b295e7091fd4_5db53287-36c2-452b-ac80-88c2c1be2a8b", "force": true, "format": "json"}]: dispatch
Oct 14 06:21:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:01abc44a-a997-4e83-8516-b295e7091fd4_5db53287-36c2-452b-ac80-88c2c1be2a8b, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < ""
Oct 14 06:21:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41/.meta.tmp'
Oct 14 06:21:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41/.meta.tmp' to config b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41/.meta'
Oct 14 06:21:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:01abc44a-a997-4e83-8516-b295e7091fd4_5db53287-36c2-452b-ac80-88c2c1be2a8b, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < ""
Oct 14 06:21:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5b1468-8901-4c3b-ae8f-82b4933bfd41", "snap_name": "01abc44a-a997-4e83-8516-b295e7091fd4", "force": true, "format": "json"}]: dispatch
Oct 14 06:21:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:01abc44a-a997-4e83-8516-b295e7091fd4, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < ""
Oct 14 06:21:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41/.meta.tmp'
Oct 14 06:21:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41/.meta.tmp' to config b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41/.meta'
Oct 14 06:21:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:01abc44a-a997-4e83-8516-b295e7091fd4, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < ""
Oct 14 06:21:41 localhost nova_compute[295778]: 2025-10-14 10:21:41.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:21:41 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cd4f2913-4999-4703-b529-06ac3467fb6c", "format": "json"}]: dispatch
Oct 14 06:21:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cd4f2913-4999-4703-b529-06ac3467fb6c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cd4f2913-4999-4703-b529-06ac3467fb6c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:21:41 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:41.361+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd4f2913-4999-4703-b529-06ac3467fb6c' of type subvolume
Oct 14 06:21:41 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd4f2913-4999-4703-b529-06ac3467fb6c' of type subvolume
Oct 14 06:21:41 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cd4f2913-4999-4703-b529-06ac3467fb6c", "force": true, "format": "json"}]: dispatch
Oct 14 06:21:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd4f2913-4999-4703-b529-06ac3467fb6c, vol_name:cephfs) < ""
Oct 14 06:21:41 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cd4f2913-4999-4703-b529-06ac3467fb6c'' moved to trashcan
Oct 14 06:21:41 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:21:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd4f2913-4999-4703-b529-06ac3467fb6c, vol_name:cephfs) < ""
Oct 14 06:21:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 80 KiB/s wr, 99 op/s
Oct 14 06:21:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:21:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:21:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:21:41 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "dbba3ffb-946c-4e77-80a5-5f612dede0ba", "force": true, "format": "json"}]: dispatch
Oct 14 06:21:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:dbba3ffb-946c-4e77-80a5-5f612dede0ba, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 14 06:21:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:dbba3ffb-946c-4e77-80a5-5f612dede0ba, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 14 06:21:41 localhost podman[344312]: 2025-10-14 10:21:41.566038285 +0000 UTC m=+0.101660055 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 14 06:21:41 localhost podman[344313]: 2025-10-14 10:21:41.611830834 +0000 UTC m=+0.141218569 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Oct 14 06:21:41 localhost podman[344314]: 2025-10-14 10:21:41.675303542 +0000 UTC m=+0.202680123 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 06:21:41 localhost podman[344312]: 2025-10-14 10:21:41.686567151 +0000 UTC m=+0.222188941 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc.)
Oct 14 06:21:41 localhost podman[344313]: 2025-10-14 10:21:41.700103722 +0000 UTC m=+0.229491477 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 06:21:41 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:21:41 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:21:41 localhost podman[344314]: 2025-10-14 10:21:41.738393531 +0000 UTC m=+0.265770112 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 14 06:21:41 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:21:41 localhost nova_compute[295778]: 2025-10-14 10:21:41.916 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:21:41 localhost nova_compute[295778]: 2025-10-14 10:21:41.917 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:21:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e236 do_prune osdmap full prune enabled Oct 14 06:21:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e237 e237: 6 total, 6 up, 6 in Oct 14 06:21:42 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e237: 6 total, 6 up, 6 in Oct 14 06:21:42 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:42.728 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:21:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:43 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 14 06:21:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:21:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 14 06:21:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 14 06:21:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:21:43 localhost 
ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 80 KiB/s wr, 99 op/s Oct 14 06:21:43 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:21:43 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 14 06:21:43 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 14 06:21:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ab5b1468-8901-4c3b-ae8f-82b4933bfd41", "format": "json"}]: dispatch Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:43 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab5b1468-8901-4c3b-ae8f-82b4933bfd41' of type subvolume Oct 14 
06:21:43 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:43.956+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab5b1468-8901-4c3b-ae8f-82b4933bfd41' of type subvolume Oct 14 06:21:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ab5b1468-8901-4c3b-ae8f-82b4933bfd41", "force": true, "format": "json"}]: dispatch Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < "" Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ab5b1468-8901-4c3b-ae8f-82b4933bfd41'' moved to trashcan Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:21:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab5b1468-8901-4c3b-ae8f-82b4933bfd41, vol_name:cephfs) < "" Oct 14 06:21:44 localhost nova_compute[295778]: 2025-10-14 10:21:44.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "507fac9c-d803-4322-86fc-5518cf276942", "format": "json"}]: dispatch Oct 14 06:21:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:507fac9c-d803-4322-86fc-5518cf276942, format:json, prefix:fs clone status, 
vol_name:cephfs) < "" Oct 14 06:21:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:507fac9c-d803-4322-86fc-5518cf276942, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:44 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:44.531+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '507fac9c-d803-4322-86fc-5518cf276942' of type subvolume Oct 14 06:21:44 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '507fac9c-d803-4322-86fc-5518cf276942' of type subvolume Oct 14 06:21:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "507fac9c-d803-4322-86fc-5518cf276942", "force": true, "format": "json"}]: dispatch Oct 14 06:21:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:507fac9c-d803-4322-86fc-5518cf276942, vol_name:cephfs) < "" Oct 14 06:21:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/507fac9c-d803-4322-86fc-5518cf276942'' moved to trashcan Oct 14 06:21:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:21:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:507fac9c-d803-4322-86fc-5518cf276942, vol_name:cephfs) < "" Oct 14 06:21:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": 
"e3ee525b-f9df-4c20-8081-8f300b1d0efe", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:e3ee525b-f9df-4c20-8081-8f300b1d0efe, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:e3ee525b-f9df-4c20-8081-8f300b1d0efe, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:21:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e237 do_prune osdmap full prune enabled Oct 14 06:21:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e238 e238: 6 total, 6 up, 6 in Oct 14 06:21:45 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e238: 6 total, 6 up, 6 in Oct 14 06:21:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 176 KiB/s wr, 107 op/s Oct 14 06:21:46 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "334e9874-cea1-4939-828c-3223ad7dc82a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:334e9874-cea1-4939-828c-3223ad7dc82a, vol_name:cephfs) < "" Oct 14 06:21:46 localhost nova_compute[295778]: 2025-10-14 10:21:46.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/334e9874-cea1-4939-828c-3223ad7dc82a/.meta.tmp' Oct 14 06:21:46 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/334e9874-cea1-4939-828c-3223ad7dc82a/.meta.tmp' to config b'/volumes/_nogroup/334e9874-cea1-4939-828c-3223ad7dc82a/.meta' Oct 14 06:21:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:334e9874-cea1-4939-828c-3223ad7dc82a, vol_name:cephfs) < "" Oct 14 06:21:46 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "334e9874-cea1-4939-828c-3223ad7dc82a", "format": "json"}]: dispatch Oct 14 06:21:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:334e9874-cea1-4939-828c-3223ad7dc82a, vol_name:cephfs) < "" Oct 14 06:21:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:334e9874-cea1-4939-828c-3223ad7dc82a, vol_name:cephfs) < "" Oct 14 06:21:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:21:46 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:21:46 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:21:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:21:46 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:46 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice_bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:21:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:21:46 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:46 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:47 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:47 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:47 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": 
"json"}]': finished Oct 14 06:21:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 144 KiB/s wr, 88 op/s Oct 14 06:21:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:21:47 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2901869645' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:21:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:21:47 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2901869645' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:21:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "7adbd736-0276-4e57-b2a0-41f3e4104130", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:7adbd736-0276-4e57-b2a0-41f3e4104130, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:7adbd736-0276-4e57-b2a0-41f3e4104130, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e3ee525b-f9df-4c20-8081-8f300b1d0efe", "force": true, "format": "json"}]: 
dispatch Oct 14 06:21:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e3ee525b-f9df-4c20-8081-8f300b1d0efe, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e3ee525b-f9df-4c20-8081-8f300b1d0efe, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "22b57abb-7065-4d7e-be09-4f07eb9a5497", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:22b57abb-7065-4d7e-be09-4f07eb9a5497, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:22b57abb-7065-4d7e-be09-4f07eb9a5497, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 72 KiB/s wr, 6 op/s Oct 14 06:21:49 localhost nova_compute[295778]: 2025-10-14 10:21:49.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:49 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:21:49 localhost ceph-mgr[300442]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:21:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 14 06:21:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:21:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:21:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:49 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:21:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 
06:21:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:21:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:21:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:21:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e238 do_prune osdmap full prune enabled Oct 14 06:21:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e239 e239: 6 total, 6 up, 6 in Oct 14 06:21:50 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e239: 6 total, 6 up, 6 in Oct 14 06:21:50 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:50 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:21:50 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:21:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "334e9874-cea1-4939-828c-3223ad7dc82a", "format": "json"}]: dispatch Oct 14 06:21:50 localhost 
ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:334e9874-cea1-4939-828c-3223ad7dc82a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:334e9874-cea1-4939-828c-3223ad7dc82a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:21:50 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:21:50.351+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '334e9874-cea1-4939-828c-3223ad7dc82a' of type subvolume Oct 14 06:21:50 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '334e9874-cea1-4939-828c-3223ad7dc82a' of type subvolume Oct 14 06:21:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "334e9874-cea1-4939-828c-3223ad7dc82a", "force": true, "format": "json"}]: dispatch Oct 14 06:21:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:334e9874-cea1-4939-828c-3223ad7dc82a, vol_name:cephfs) < "" Oct 14 06:21:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/334e9874-cea1-4939-828c-3223ad7dc82a'' moved to trashcan Oct 14 06:21:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:21:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:334e9874-cea1-4939-828c-3223ad7dc82a, vol_name:cephfs) < "" Oct 14 06:21:50 localhost ceph-mgr[300442]: 
log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "7adbd736-0276-4e57-b2a0-41f3e4104130", "force": true, "format": "json"}]: dispatch Oct 14 06:21:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:7adbd736-0276-4e57-b2a0-41f3e4104130, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:7adbd736-0276-4e57-b2a0-41f3e4104130, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "fdc90901-b0bf-4113-b315-738b351f8d6c", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:fdc90901-b0bf-4113-b315-738b351f8d6c, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:fdc90901-b0bf-4113-b315-738b351f8d6c, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "22b57abb-7065-4d7e-be09-4f07eb9a5497", "force": true, "format": "json"}]: dispatch Oct 14 06:21:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:22b57abb-7065-4d7e-be09-4f07eb9a5497, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 
14 06:21:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:22b57abb-7065-4d7e-be09-4f07eb9a5497, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:51 localhost nova_compute[295778]: 2025-10-14 10:21:51.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 143 KiB/s wr, 35 op/s Oct 14 06:21:53 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch Oct 14 06:21:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:21:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:53 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice_bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:21:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command 
mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:21:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:53 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:53 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:53 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", 
"caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:53 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 137 KiB/s wr, 34 op/s Oct 14 06:21:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:21:53 localhost podman[344382]: 2025-10-14 10:21:53.538277343 +0000 UTC m=+0.081245292 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true) Oct 14 06:21:53 localhost podman[344382]: 2025-10-14 10:21:53.554538376 +0000 UTC m=+0.097506365 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, tcib_managed=true) Oct 14 06:21:53 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:21:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "dbb46244-089b-47f3-aadf-4feb970001eb", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:dbb46244-089b-47f3-aadf-4feb970001eb, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:dbb46244-089b-47f3-aadf-4feb970001eb, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "fdc90901-b0bf-4113-b315-738b351f8d6c", "force": true, "format": "json"}]: dispatch Oct 14 06:21:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:fdc90901-b0bf-4113-b315-738b351f8d6c, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:fdc90901-b0bf-4113-b315-738b351f8d6c, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "e5b8eb22-ebc4-41dd-a1ab-c673f2438111", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:e5b8eb22-ebc4-41dd-a1ab-c673f2438111, mode:0755, 
prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:e5b8eb22-ebc4-41dd-a1ab-c673f2438111, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:21:54 localhost nova_compute[295778]: 2025-10-14 10:21:54.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:21:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 110 KiB/s wr, 28 op/s Oct 14 06:21:56 localhost nova_compute[295778]: 2025-10-14 10:21:56.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:21:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", 
"entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 14 06:21:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:21:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, 
sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:21:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "02ff21d3-7e7b-42be-b4fe-1162422d5daf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf/.meta.tmp' Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf/.meta.tmp' to config b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf/.meta' Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:21:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "02ff21d3-7e7b-42be-b4fe-1162422d5daf", "format": "json"}]: dispatch Oct 14 06:21:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:21:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:21:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:21:57 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:21:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "dbb46244-089b-47f3-aadf-4feb970001eb", "force": true, "format": "json"}]: dispatch Oct 14 06:21:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:dbb46244-089b-47f3-aadf-4feb970001eb, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:dbb46244-089b-47f3-aadf-4feb970001eb, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:57 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:21:57 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:21:57 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:21:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s 
rd, 110 KiB/s wr, 28 op/s Oct 14 06:21:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e5b8eb22-ebc4-41dd-a1ab-c673f2438111", "force": true, "format": "json"}]: dispatch Oct 14 06:21:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e5b8eb22-ebc4-41dd-a1ab-c673f2438111, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e5b8eb22-ebc4-41dd-a1ab-c673f2438111, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:57.645 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:21:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:57.646 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:21:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:21:57.646 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:21:58 localhost ovn_controller[156286]: 2025-10-14T10:21:58Z|00415|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory Oct 14 06:21:58 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader).osd e239 do_prune osdmap full prune enabled Oct 14 06:21:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e240 e240: 6 total, 6 up, 6 in Oct 14 06:21:58 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e240: 6 total, 6 up, 6 in Oct 14 06:21:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "10eb7dd1-f447-49e8-8cbd-0c6c48bc321a", "force": true, "format": "json"}]: dispatch Oct 14 06:21:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:10eb7dd1-f447-49e8-8cbd-0c6c48bc321a, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:10eb7dd1-f447-49e8-8cbd-0c6c48bc321a, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:21:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e240 do_prune osdmap full prune enabled Oct 14 06:21:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e241 e241: 6 total, 6 up, 6 in Oct 14 06:21:59 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e241: 6 total, 6 up, 6 in Oct 14 06:21:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 66 KiB/s wr, 5 op/s Oct 14 06:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 06:21:59 localhost nova_compute[295778]: 2025-10-14 10:21:59.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:21:59 localhost podman[344404]: 2025-10-14 10:21:59.610174518 +0000 UTC m=+0.092502602 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:21:59 localhost podman[344404]: 2025-10-14 10:21:59.624118729 +0000 UTC m=+0.106446853 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 
'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:21:59 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:21:59 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:21:59 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:21:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:21:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:21:59 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:21:59 localhost podman[344403]: 2025-10-14 10:21:59.714417912 +0000 UTC m=+0.196936941 container health_status 
6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:21:59 localhost podman[344403]: 2025-10-14 10:21:59.718938622 +0000 UTC m=+0.201457681 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:21:59 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:21:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:21:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:21:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:21:59 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:22:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "02ff21d3-7e7b-42be-b4fe-1162422d5daf", "snap_name": "bb2b7989-ec27-4b2c-9e47-ce36a8c3dc3a", "format": "json"}]: dispatch Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bb2b7989-ec27-4b2c-9e47-ce36a8c3dc3a, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bb2b7989-ec27-4b2c-9e47-ce36a8c3dc3a, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:22:00 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:00 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:00 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:00 localhost podman[246584]: time="2025-10-14T10:22:00Z" level=info 
msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:22:00 localhost podman[246584]: @ - - [14/Oct/2025:10:22:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:22:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "6c0c4a84-d974-4231-9f14-b8a610957657", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:6c0c4a84-d974-4231-9f14-b8a610957657, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:22:00 localhost podman[246584]: @ - - [14/Oct/2025:10:22:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18909 "" "Go-http-client/1.1" Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:6c0c4a84-d974-4231-9f14-b8a610957657, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:22:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "14ce70b8-5566-4190-a053-f6f7dc23ec0d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:14ce70b8-5566-4190-a053-f6f7dc23ec0d, vol_name:cephfs) < "" Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/14ce70b8-5566-4190-a053-f6f7dc23ec0d/.meta.tmp' Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/14ce70b8-5566-4190-a053-f6f7dc23ec0d/.meta.tmp' to config b'/volumes/_nogroup/14ce70b8-5566-4190-a053-f6f7dc23ec0d/.meta' Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:14ce70b8-5566-4190-a053-f6f7dc23ec0d, vol_name:cephfs) < "" Oct 14 06:22:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "14ce70b8-5566-4190-a053-f6f7dc23ec0d", "format": "json"}]: dispatch Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:14ce70b8-5566-4190-a053-f6f7dc23ec0d, vol_name:cephfs) < "" Oct 14 06:22:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:14ce70b8-5566-4190-a053-f6f7dc23ec0d, vol_name:cephfs) < "" Oct 14 06:22:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:22:00 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:22:01 localhost nova_compute[295778]: 2025-10-14 10:22:01.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB 
used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 138 KiB/s wr, 36 op/s Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0. Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.425069) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64 Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437322425132, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2702, "num_deletes": 272, "total_data_size": 2556529, "memory_usage": 2630664, "flush_reason": "Manual Compaction"} Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437322442240, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 2491575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34251, "largest_seqno": 36952, "table_properties": {"data_size": 2479605, "index_size": 7579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 29880, "raw_average_key_size": 22, "raw_value_size": 2454012, "raw_average_value_size": 1867, "num_data_blocks": 323, "num_entries": 1314, "num_filter_entries": 1314, "num_deletions": 272, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, 
"comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760437215, "oldest_key_time": 1760437215, "file_creation_time": 1760437322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}} Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 17218 microseconds, and 6541 cpu microseconds. Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.442293) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 2491575 bytes OK Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.442317) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.444374) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.444396) EVENT_LOG_v1 {"time_micros": 1760437322444389, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.444418) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level 
multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2544159, prev total WAL file size 2544159, number of live WAL files 2. Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.445176) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end) Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(2433KB)], [63(16MB)] Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437322445232, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 20109932, "oldest_snapshot_seqno": -1} Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 14185 keys, 18486379 bytes, temperature: kUnknown Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437322550499, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 18486379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 
18402524, "index_size": 47274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35525, "raw_key_size": 380659, "raw_average_key_size": 26, "raw_value_size": 18158497, "raw_average_value_size": 1280, "num_data_blocks": 1769, "num_entries": 14185, "num_filter_entries": 14185, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760437322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}} Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.551030) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 18486379 bytes Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.558013) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.9 rd, 175.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 16.8 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(15.5) write-amplify(7.4) OK, records in: 14736, records dropped: 551 output_compression: NoCompression Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.558042) EVENT_LOG_v1 {"time_micros": 1760437322558028, "job": 38, "event": "compaction_finished", "compaction_time_micros": 105338, "compaction_time_cpu_micros": 48894, "output_level": 6, "num_output_files": 1, "total_output_size": 18486379, "num_input_records": 14736, "num_output_records": 14185, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437322558546, "job": 38, "event": "table_file_deletion", "file_number": 65} Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437322561253, 
"job": 38, "event": "table_file_deletion", "file_number": 63} Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.445103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.561364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.561373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.561376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.561379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:02 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:02.561382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:03 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:22:03 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:03 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 14 06:22:03 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 14 06:22:03 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, 
format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:03 localhost openstack_network_exporter[248748]: ERROR 10:22:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:22:03 localhost openstack_network_exporter[248748]: ERROR 10:22:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:22:03 localhost openstack_network_exporter[248748]: ERROR 10:22:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:22:03 localhost openstack_network_exporter[248748]: ERROR 10:22:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:22:03 localhost openstack_network_exporter[248748]: Oct 14 06:22:03 localhost openstack_network_exporter[248748]: ERROR 10:22:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:22:03 localhost openstack_network_exporter[248748]: Oct 14 06:22:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 73 KiB/s wr, 30 op/s Oct 14 06:22:03 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:03 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 14 06:22:03 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:22:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:22:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:22:03 localhost podman[344444]: 2025-10-14 10:22:03.56232148 +0000 UTC m=+0.088108885 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, tcib_managed=true) Oct 14 06:22:03 localhost podman[344443]: 2025-10-14 10:22:03.599279043 +0000 UTC m=+0.124786440 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 14 06:22:03 localhost podman[344443]: 2025-10-14 10:22:03.609904196 +0000 UTC m=+0.135411613 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 14 06:22:03 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:22:03 localhost podman[344444]: 2025-10-14 10:22:03.625953773 +0000 UTC m=+0.151741178 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:22:03 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:22:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "6c0c4a84-d974-4231-9f14-b8a610957657", "force": true, "format": "json"}]: dispatch Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:6c0c4a84-d974-4231-9f14-b8a610957657, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:6c0c4a84-d974-4231-9f14-b8a610957657, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:22:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "14ce70b8-5566-4190-a053-f6f7dc23ec0d", "format": "json"}]: dispatch Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:14ce70b8-5566-4190-a053-f6f7dc23ec0d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:14ce70b8-5566-4190-a053-f6f7dc23ec0d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:22:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:22:03.940+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '14ce70b8-5566-4190-a053-f6f7dc23ec0d' of type subvolume Oct 14 06:22:03 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '14ce70b8-5566-4190-a053-f6f7dc23ec0d' of type subvolume Oct 14 06:22:03 localhost ceph-mgr[300442]: 
log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "14ce70b8-5566-4190-a053-f6f7dc23ec0d", "force": true, "format": "json"}]: dispatch Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:14ce70b8-5566-4190-a053-f6f7dc23ec0d, vol_name:cephfs) < "" Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/14ce70b8-5566-4190-a053-f6f7dc23ec0d'' moved to trashcan Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:22:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:14ce70b8-5566-4190-a053-f6f7dc23ec0d, vol_name:cephfs) < "" Oct 14 06:22:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e241 do_prune osdmap full prune enabled Oct 14 06:22:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e242 e242: 6 total, 6 up, 6 in Oct 14 06:22:04 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e242: 6 total, 6 up, 6 in Oct 14 06:22:04 localhost nova_compute[295778]: 2025-10-14 10:22:04.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02ff21d3-7e7b-42be-b4fe-1162422d5daf", "snap_name": "bb2b7989-ec27-4b2c-9e47-ce36a8c3dc3a_d069e3fd-0961-4d23-9db7-90d16fc1a74e", "force": true, "format": "json"}]: dispatch Oct 14 06:22:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bb2b7989-ec27-4b2c-9e47-ce36a8c3dc3a_d069e3fd-0961-4d23-9db7-90d16fc1a74e, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:22:04 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Oct 14 06:22:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf/.meta.tmp' Oct 14 06:22:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf/.meta.tmp' to config b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf/.meta' Oct 14 06:22:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bb2b7989-ec27-4b2c-9e47-ce36a8c3dc3a_d069e3fd-0961-4d23-9db7-90d16fc1a74e, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:22:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02ff21d3-7e7b-42be-b4fe-1162422d5daf", "snap_name": "bb2b7989-ec27-4b2c-9e47-ce36a8c3dc3a", "force": true, "format": "json"}]: dispatch Oct 14 06:22:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bb2b7989-ec27-4b2c-9e47-ce36a8c3dc3a, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:22:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf/.meta.tmp' Oct 14 06:22:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf/.meta.tmp' to config b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf/.meta' Oct 14 06:22:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bb2b7989-ec27-4b2c-9e47-ce36a8c3dc3a, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:22:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0. Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.111383) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67 Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437325111436, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 302, "num_deletes": 252, "total_data_size": 59596, "memory_usage": 65888, "flush_reason": "Manual Compaction"} Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437325114422, "cf_name": "default", "job": 39, "event": "table_file_creation", 
"file_number": 68, "file_size": 58735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36953, "largest_seqno": 37254, "table_properties": {"data_size": 56723, "index_size": 187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5856, "raw_average_key_size": 20, "raw_value_size": 52587, "raw_average_value_size": 183, "num_data_blocks": 8, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760437323, "oldest_key_time": 1760437323, "file_creation_time": 1760437325, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}} Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 3081 microseconds, and 947 cpu microseconds. Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.114467) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 58735 bytes OK Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.114485) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.115991) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.116007) EVENT_LOG_v1 {"time_micros": 1760437325116001, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.116023) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 57398, prev total WAL file size 57398, number of live WAL files 2. Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.116798) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323537' seq:72057594037927935, type:22 .. 
'6D6772737461740034353130' seq:0, type:0; will stop at (end) Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(57KB)], [66(17MB)] Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437325116844, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 18545114, "oldest_snapshot_seqno": -1} Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 13952 keys, 16422414 bytes, temperature: kUnknown Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437325220826, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 16422414, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16345050, "index_size": 41423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34885, "raw_key_size": 376034, "raw_average_key_size": 26, "raw_value_size": 16109870, "raw_average_value_size": 1154, "num_data_blocks": 1523, "num_entries": 13952, "num_filter_entries": 13952, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760437325, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}} Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.221426) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 16422414 bytes Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.223275) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.6 rd, 157.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 17.6 +0.0 blob) out(15.7 +0.0 blob), read-write-amplify(595.3) write-amplify(279.6) OK, records in: 14471, records dropped: 519 output_compression: NoCompression Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.223304) EVENT_LOG_v1 {"time_micros": 1760437325223292, "job": 40, "event": "compaction_finished", "compaction_time_micros": 104398, "compaction_time_cpu_micros": 48019, "output_level": 6, "num_output_files": 1, "total_output_size": 16422414, "num_input_records": 14471, "num_output_records": 13952, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005486731/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437325223688, "job": 40, "event": "table_file_deletion", "file_number": 68} Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437325226438, "job": 40, "event": "table_file_deletion", "file_number": 66} Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.116657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.226474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.226480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.226625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.226631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:05 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:22:05.226634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:22:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 143 KiB/s wr, 78 op/s Oct 
14 06:22:06 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch Oct 14 06:22:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:22:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:06 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:22:06 localhost nova_compute[295778]: 2025-10-14 10:22:06.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:22:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 
172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e242 do_prune osdmap full prune enabled Oct 14 06:22:06 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:06 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:06 localhost ceph-mon[307093]: 
from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e243 e243: 6 total, 6 up, 6 in Oct 14 06:22:06 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e243: 6 total, 6 up, 6 in Oct 14 06:22:06 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "da649294-b614-4aa1-958e-3448ee0b4447", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:22:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:da649294-b614-4aa1-958e-3448ee0b4447, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:22:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:da649294-b614-4aa1-958e-3448ee0b4447, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:22:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "21061483-cc6d-4575-b0f4-95e330014fbe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:21061483-cc6d-4575-b0f4-95e330014fbe, vol_name:cephfs) < "" 
Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/21061483-cc6d-4575-b0f4-95e330014fbe/.meta.tmp' Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/21061483-cc6d-4575-b0f4-95e330014fbe/.meta.tmp' to config b'/volumes/_nogroup/21061483-cc6d-4575-b0f4-95e330014fbe/.meta' Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:21061483-cc6d-4575-b0f4-95e330014fbe, vol_name:cephfs) < "" Oct 14 06:22:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "21061483-cc6d-4575-b0f4-95e330014fbe", "format": "json"}]: dispatch Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:21061483-cc6d-4575-b0f4-95e330014fbe, vol_name:cephfs) < "" Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:21061483-cc6d-4575-b0f4-95e330014fbe, vol_name:cephfs) < "" Oct 14 06:22:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:22:07 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:22:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 127 KiB/s wr, 69 op/s Oct 14 
06:22:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e243 do_prune osdmap full prune enabled Oct 14 06:22:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e244 e244: 6 total, 6 up, 6 in Oct 14 06:22:07 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e244: 6 total, 6 up, 6 in Oct 14 06:22:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "02ff21d3-7e7b-42be-b4fe-1162422d5daf", "format": "json"}]: dispatch Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:22:07 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:22:07.934+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '02ff21d3-7e7b-42be-b4fe-1162422d5daf' of type subvolume Oct 14 06:22:07 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '02ff21d3-7e7b-42be-b4fe-1162422d5daf' of type subvolume Oct 14 06:22:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "02ff21d3-7e7b-42be-b4fe-1162422d5daf", "force": true, "format": "json"}]: dispatch Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/02ff21d3-7e7b-42be-b4fe-1162422d5daf'' moved to trashcan Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:22:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:02ff21d3-7e7b-42be-b4fe-1162422d5daf, vol_name:cephfs) < "" Oct 14 06:22:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:22:09 Oct 14 06:22:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:22:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:22:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['manila_metadata', '.mgr', 'vms', 'backups', 'manila_data', 'images', 'volumes'] Oct 14 06:22:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:22:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 73 KiB/s wr, 51 op/s Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014877447131490787 of space, bias 1.0, pg target 0.2970530277254327 quantized to 32 (current 32) Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:22:09 localhost 
ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32) Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.9084135957565606e-06 of space, bias 1.0, pg target 0.00037977430555555556 quantized to 32 (current 32) Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:22:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0008626029452819654 of space, bias 4.0, pg target 0.6866319444444445 quantized to 16 (current 16) Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:22:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:22:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e244 do_prune osdmap full prune enabled Oct 14 06:22:09 localhost nova_compute[295778]: 2025-10-14 10:22:09.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Oct 14 06:22:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e245 e245: 6 total, 6 up, 6 in
Oct 14 06:22:09 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e245: 6 total, 6 up, 6 in
Oct 14 06:22:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Oct 14 06:22:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 14 06:22:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Oct 14 06:22:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 14 06:22:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1
Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 14 06:22:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "da649294-b614-4aa1-958e-3448ee0b4447", "force": true, "format": "json"}]: dispatch
Oct 14 06:22:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:da649294-b614-4aa1-958e-3448ee0b4447, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 14 06:22:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:da649294-b614-4aa1-958e-3448ee0b4447, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 14 06:22:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:22:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "21061483-cc6d-4575-b0f4-95e330014fbe", "format": "json"}]: dispatch
Oct 14 06:22:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:21061483-cc6d-4575-b0f4-95e330014fbe, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:22:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:21061483-cc6d-4575-b0f4-95e330014fbe, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:22:10 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:22:10.540+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '21061483-cc6d-4575-b0f4-95e330014fbe' of type subvolume
Oct 14 06:22:10 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '21061483-cc6d-4575-b0f4-95e330014fbe' of type subvolume
Oct 14 06:22:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "21061483-cc6d-4575-b0f4-95e330014fbe", "force": true, "format": "json"}]: dispatch
Oct 14 06:22:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:21061483-cc6d-4575-b0f4-95e330014fbe, vol_name:cephfs) < ""
Oct 14 06:22:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/21061483-cc6d-4575-b0f4-95e330014fbe'' moved to trashcan
Oct 14 06:22:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:22:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:21061483-cc6d-4575-b0f4-95e330014fbe, vol_name:cephfs) < ""
Oct 14 06:22:10 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 14 06:22:10 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 14 06:22:10 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 14 06:22:11 localhost nova_compute[295778]: 2025-10-14 10:22:11.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 105 KiB/s wr, 127 op/s
Oct 14 06:22:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:22:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:22:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:22:12 localhost podman[344486]: 2025-10-14 10:22:12.547682733 +0000 UTC m=+0.083012360 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 06:22:12 localhost podman[344486]: 2025-10-14 10:22:12.562297532 +0000 UTC m=+0.097627169 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 14 06:22:12 localhost podman[344484]: 2025-10-14 10:22:12.59496311 +0000 UTC m=+0.136557313 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, version=9.6, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Oct 14 06:22:12 localhost podman[344484]: 2025-10-14 10:22:12.611154961 +0000 UTC m=+0.152749164 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm)
Oct 14 06:22:12 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:22:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 14 06:22:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e246 e246: 6 total, 6 up, 6 in
Oct 14 06:22:12 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e246: 6 total, 6 up, 6 in
Oct 14 06:22:12 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:22:12 localhost systemd[1]: tmp-crun.qhp2Y9.mount: Deactivated successfully.
Oct 14 06:22:12 localhost podman[344485]: 2025-10-14 10:22:12.758229384 +0000 UTC m=+0.297210337 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 14 06:22:12 localhost podman[344485]: 2025-10-14 10:22:12.795070974 +0000 UTC m=+0.334051947 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:22:12 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:22:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch
Oct 14 06:22:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:22:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 14 06:22:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:12 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice with tenant 40ca4558a36f42aeba3e8c219141b2fc
Oct 14 06:22:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:22:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:22:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:22:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:22:13 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "d3b05f8f-c81a-45ae-82ff-0e59746dde12", "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:22:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:d3b05f8f-c81a-45ae-82ff-0e59746dde12, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 14 06:22:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:d3b05f8f-c81a-45ae-82ff-0e59746dde12, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 14 06:22:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 105 KiB/s wr, 127 op/s
Oct 14 06:22:13 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 14 06:22:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:22:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:22:13 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e247 e247: 6 total, 6 up, 6 in
Oct 14 06:22:13 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e247: 6 total, 6 up, 6 in
Oct 14 06:22:14 localhost nova_compute[295778]: 2025-10-14 10:22:14.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:22:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 14 06:22:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e248 e248: 6 total, 6 up, 6 in
Oct 14 06:22:15 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e248: 6 total, 6 up, 6 in
Oct 14 06:22:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 180 KiB/s wr, 185 op/s
Oct 14 06:22:16 localhost nova_compute[295778]: 2025-10-14 10:22:16.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:16 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "d3b05f8f-c81a-45ae-82ff-0e59746dde12", "force": true, "format": "json"}]: dispatch
Oct 14 06:22:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:d3b05f8f-c81a-45ae-82ff-0e59746dde12, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 14 06:22:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:d3b05f8f-c81a-45ae-82ff-0e59746dde12, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 14 06:22:16 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch
Oct 14 06:22:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 14 06:22:16 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Oct 14 06:22:16 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 14 06:22:16 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 14 06:22:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:16 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch
Oct 14 06:22:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:16 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1
Oct 14 06:22:16 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 14 06:22:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 14 06:22:17 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:17 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 14 06:22:17 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 14 06:22:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e249 e249: 6 total, 6 up, 6 in
Oct 14 06:22:17 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e249: 6 total, 6 up, 6 in
Oct 14 06:22:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 88 KiB/s wr, 66 op/s
Oct 14 06:22:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 14 06:22:17 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2770588864' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 14 06:22:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 14 06:22:17 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2770588864' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 14 06:22:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 70 KiB/s wr, 52 op/s
Oct 14 06:22:19 localhost nova_compute[295778]: 2025-10-14 10:22:19.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch
Oct 14 06:22:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:22:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 14 06:22:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:19 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice with tenant 40ca4558a36f42aeba3e8c219141b2fc
Oct 14 06:22:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:22:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:22:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:22:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:22:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:22:20 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:20 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:22:20 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:22:21 localhost nova_compute[295778]: 2025-10-14 10:22:21.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 114 KiB/s wr, 93 op/s
Oct 14 06:22:22 localhost nova_compute[295778]: 2025-10-14 10:22:22.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:22 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:22.462 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:22:22 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:22.463 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 06:22:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch
Oct 14 06:22:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 14 06:22:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Oct 14 06:22:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 14 06:22:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch
Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1
Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 106 KiB/s wr, 86 op/s
Oct 14 06:22:23 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:23.465 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 06:22:23 localhost ceph-mgr[300442]:
log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "855bae17-737e-4787-8aeb-683acb2cee52", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:855bae17-737e-4787-8aeb-683acb2cee52, vol_name:cephfs) < "" Oct 14 06:22:23 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:22:23 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 14 06:22:23 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/855bae17-737e-4787-8aeb-683acb2cee52/.meta.tmp' Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/855bae17-737e-4787-8aeb-683acb2cee52/.meta.tmp' to config b'/volumes/_nogroup/855bae17-737e-4787-8aeb-683acb2cee52/.meta' Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:855bae17-737e-4787-8aeb-683acb2cee52, vol_name:cephfs) < "" Oct 14 06:22:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs 
subvolume getpath", "vol_name": "cephfs", "sub_name": "855bae17-737e-4787-8aeb-683acb2cee52", "format": "json"}]: dispatch Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:855bae17-737e-4787-8aeb-683acb2cee52, vol_name:cephfs) < "" Oct 14 06:22:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:855bae17-737e-4787-8aeb-683acb2cee52, vol_name:cephfs) < "" Oct 14 06:22:23 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:22:23 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:22:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1dff844a-02a7-4aba-b6ad-72a8742ac52f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:22:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1dff844a-02a7-4aba-b6ad-72a8742ac52f, vol_name:cephfs) < "" Oct 14 06:22:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1dff844a-02a7-4aba-b6ad-72a8742ac52f/.meta.tmp' Oct 14 06:22:24 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1dff844a-02a7-4aba-b6ad-72a8742ac52f/.meta.tmp' to config b'/volumes/_nogroup/1dff844a-02a7-4aba-b6ad-72a8742ac52f/.meta' Oct 14 06:22:24 localhost 
ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1dff844a-02a7-4aba-b6ad-72a8742ac52f, vol_name:cephfs) < "" Oct 14 06:22:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1dff844a-02a7-4aba-b6ad-72a8742ac52f", "format": "json"}]: dispatch Oct 14 06:22:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1dff844a-02a7-4aba-b6ad-72a8742ac52f, vol_name:cephfs) < "" Oct 14 06:22:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1dff844a-02a7-4aba-b6ad-72a8742ac52f, vol_name:cephfs) < "" Oct 14 06:22:24 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:22:24 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:22:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:22:24 localhost systemd[1]: tmp-crun.DbK9hy.mount: Deactivated successfully. 
Oct 14 06:22:24 localhost podman[344552]: 2025-10-14 10:22:24.543174086 +0000 UTC m=+0.087626623 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 06:22:24 localhost podman[344552]: 2025-10-14 10:22:24.551137927 +0000 UTC m=+0.095590504 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009) Oct 14 06:22:24 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:22:24 localhost nova_compute[295778]: 2025-10-14 10:22:24.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:22:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e249 do_prune osdmap full prune enabled Oct 14 06:22:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 e250: 6 total, 6 up, 6 in Oct 14 06:22:25 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e250: 6 total, 6 up, 6 in Oct 14 06:22:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 91 KiB/s wr, 64 op/s Oct 14 06:22:26 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:22:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:22:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : 
dispatch Oct 14 06:22:26 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice_bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:22:26 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:22:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:26 localhost nova_compute[295778]: 2025-10-14 10:22:26.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:26 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, 
sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:26 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:22:26 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:26 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 75 KiB/s wr, 53 op/s Oct 14 06:22:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1dff844a-02a7-4aba-b6ad-72a8742ac52f", "format": "json"}]: dispatch Oct 14 06:22:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1dff844a-02a7-4aba-b6ad-72a8742ac52f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:22:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:1dff844a-02a7-4aba-b6ad-72a8742ac52f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:22:27 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:22:27.616+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1dff844a-02a7-4aba-b6ad-72a8742ac52f' of type subvolume Oct 14 06:22:27 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1dff844a-02a7-4aba-b6ad-72a8742ac52f' of type subvolume Oct 14 06:22:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1dff844a-02a7-4aba-b6ad-72a8742ac52f", "force": true, "format": "json"}]: dispatch Oct 14 06:22:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1dff844a-02a7-4aba-b6ad-72a8742ac52f, vol_name:cephfs) < "" Oct 14 06:22:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1dff844a-02a7-4aba-b6ad-72a8742ac52f'' moved to trashcan Oct 14 06:22:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:22:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1dff844a-02a7-4aba-b6ad-72a8742ac52f, vol_name:cephfs) < "" Oct 14 06:22:27 localhost nova_compute[295778]: 2025-10-14 10:22:27.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m 
Oct 14 06:22:27 localhost nova_compute[295778]: 2025-10-14 10:22:27.921 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:22:27 localhost nova_compute[295778]: 2025-10-14 10:22:27.922 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:22:27 localhost nova_compute[295778]: 2025-10-14 10:22:27.922 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:22:27 localhost nova_compute[295778]: 2025-10-14 10:22:27.923 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:22:27 localhost nova_compute[295778]: 2025-10-14 10:22:27.923 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:22:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:22:28 
localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/21741527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:22:28 localhost nova_compute[295778]: 2025-10-14 10:22:28.446 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:22:28 localhost nova_compute[295778]: 2025-10-14 10:22:28.663 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:22:28 localhost nova_compute[295778]: 2025-10-14 10:22:28.665 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11318MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:22:28 localhost nova_compute[295778]: 2025-10-14 10:22:28.666 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:22:28 localhost nova_compute[295778]: 2025-10-14 10:22:28.666 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:22:28 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' 
cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "855bae17-737e-4787-8aeb-683acb2cee52", "format": "json"}]: dispatch Oct 14 06:22:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:855bae17-737e-4787-8aeb-683acb2cee52, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:22:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:855bae17-737e-4787-8aeb-683acb2cee52, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:22:28 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:22:28.774+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '855bae17-737e-4787-8aeb-683acb2cee52' of type subvolume Oct 14 06:22:28 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '855bae17-737e-4787-8aeb-683acb2cee52' of type subvolume Oct 14 06:22:28 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "855bae17-737e-4787-8aeb-683acb2cee52", "force": true, "format": "json"}]: dispatch Oct 14 06:22:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:855bae17-737e-4787-8aeb-683acb2cee52, vol_name:cephfs) < "" Oct 14 06:22:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/855bae17-737e-4787-8aeb-683acb2cee52'' moved to trashcan Oct 14 06:22:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:22:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:855bae17-737e-4787-8aeb-683acb2cee52, vol_name:cephfs) < "" Oct 14 06:22:29 localhost nova_compute[295778]: 2025-10-14 10:22:29.000 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:22:29 localhost nova_compute[295778]: 2025-10-14 10:22:29.001 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:22:29 localhost nova_compute[295778]: 2025-10-14 10:22:29.022 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:22:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:22:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 75 KiB/s wr, 53 op/s Oct 14 06:22:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:29 
localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:22:29 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1836537014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:22:29 localhost nova_compute[295778]: 2025-10-14 10:22:29.479 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:22:29 localhost nova_compute[295778]: 2025-10-14 10:22:29.484 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:22:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:22:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:22:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 14 06:22:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:22:29 localhost nova_compute[295778]: 2025-10-14 10:22:29.520 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not 
changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:22:29 localhost nova_compute[295778]: 2025-10-14 10:22:29.521 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:22:29 localhost nova_compute[295778]: 2025-10-14 10:22:29.522 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:22:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:22:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", 
"format": "json"}]: dispatch Oct 14 06:22:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:22:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:22:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:29 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:22:29 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:22:29 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:22:29 localhost nova_compute[295778]: 2025-10-14 10:22:29.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. 
Oct 14 06:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:22:30 localhost systemd[1]: tmp-crun.U3Vwky.mount: Deactivated successfully. Oct 14 06:22:30 localhost podman[344618]: 2025-10-14 10:22:30.554040511 +0000 UTC m=+0.093804513 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:22:30 localhost podman[344618]: 2025-10-14 10:22:30.567138708 +0000 UTC m=+0.106902730 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:22:30 localhost podman[246584]: time="2025-10-14T10:22:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:22:30 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:22:30 localhost podman[344617]: 2025-10-14 10:22:30.608226905 +0000 UTC m=+0.151993254 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Oct 14 06:22:30 localhost podman[246584]: @ - - [14/Oct/2025:10:22:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:22:30 localhost podman[344617]: 2025-10-14 10:22:30.691139749 +0000 UTC m=+0.234906118 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 06:22:30 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:22:30 localhost podman[246584]: @ - - [14/Oct/2025:10:22:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18911 "" "Go-http-client/1.1" Oct 14 06:22:31 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:22:31.080 270389 INFO neutron.agent.linux.ip_lib [None req-22e6c264-3d66-4578-bf74-cd609c040efe - - - - - -] Device tap463914a3-bc cannot be used as it has no MAC address#033[00m Oct 14 06:22:31 localhost nova_compute[295778]: 2025-10-14 10:22:31.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:31 localhost kernel: device tap463914a3-bc entered promiscuous mode Oct 14 06:22:31 localhost ovn_controller[156286]: 2025-10-14T10:22:31Z|00416|binding|INFO|Claiming lport 463914a3-bc7c-43a3-b544-baedda608a47 for this chassis. 
Oct 14 06:22:31 localhost NetworkManager[5972]: [1760437351.1112] manager: (tap463914a3-bc): new Generic device (/org/freedesktop/NetworkManager/Devices/73) Oct 14 06:22:31 localhost ovn_controller[156286]: 2025-10-14T10:22:31Z|00417|binding|INFO|463914a3-bc7c-43a3-b544-baedda608a47: Claiming unknown Oct 14 06:22:31 localhost nova_compute[295778]: 2025-10-14 10:22:31.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:31 localhost systemd-udevd[344669]: Network interface NamePolicy= disabled on kernel command line. Oct 14 06:22:31 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:31.126 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-394d67df-5390-48c3-8dbd-d09f9af967e5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-394d67df-5390-48c3-8dbd-d09f9af967e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '70baa45d313242eeba98f08d3412af48', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c16beb1-7f30-4ee6-92ee-194b9077ad11, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=463914a3-bc7c-43a3-b544-baedda608a47) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:22:31 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:31.128 161932 INFO neutron.agent.ovn.metadata.agent [-] Port 463914a3-bc7c-43a3-b544-baedda608a47 in datapath 394d67df-5390-48c3-8dbd-d09f9af967e5 bound to our chassis#033[00m Oct 14 06:22:31 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:31.130 161932 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 394d67df-5390-48c3-8dbd-d09f9af967e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 14 06:22:31 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:31.134 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbdfa8d-fe7d-4983-9452-628f9869a2a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:22:31 localhost journal[236030]: ethtool ioctl error on tap463914a3-bc: No such device Oct 14 06:22:31 localhost ovn_controller[156286]: 2025-10-14T10:22:31Z|00418|binding|INFO|Setting lport 463914a3-bc7c-43a3-b544-baedda608a47 ovn-installed in OVS Oct 14 06:22:31 localhost ovn_controller[156286]: 2025-10-14T10:22:31Z|00419|binding|INFO|Setting lport 463914a3-bc7c-43a3-b544-baedda608a47 up in Southbound Oct 14 06:22:31 localhost nova_compute[295778]: 2025-10-14 10:22:31.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:31 localhost journal[236030]: ethtool ioctl error on tap463914a3-bc: No such device Oct 14 06:22:31 localhost journal[236030]: ethtool ioctl error on tap463914a3-bc: No such device Oct 14 06:22:31 localhost journal[236030]: ethtool ioctl error on tap463914a3-bc: No such device Oct 14 06:22:31 localhost journal[236030]: ethtool ioctl error on tap463914a3-bc: No such device Oct 14 06:22:31 
localhost journal[236030]: ethtool ioctl error on tap463914a3-bc: No such device Oct 14 06:22:31 localhost journal[236030]: ethtool ioctl error on tap463914a3-bc: No such device Oct 14 06:22:31 localhost journal[236030]: ethtool ioctl error on tap463914a3-bc: No such device Oct 14 06:22:31 localhost nova_compute[295778]: 2025-10-14 10:22:31.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:31 localhost nova_compute[295778]: 2025-10-14 10:22:31.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:31 localhost nova_compute[295778]: 2025-10-14 10:22:31.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 69 op/s Oct 14 06:22:32 localhost podman[344740]: Oct 14 06:22:32 localhost podman[344740]: 2025-10-14 10:22:32.103441055 +0000 UTC m=+0.089458439 container create 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 06:22:32 localhost systemd[1]: Started libpod-conmon-2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57.scope. 
Oct 14 06:22:32 localhost podman[344740]: 2025-10-14 10:22:32.058903046 +0000 UTC m=+0.044920440 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 14 06:22:32 localhost systemd[1]: Started libcrun container. Oct 14 06:22:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a591d6e5e67a89be423b20cba777761a5713bdf2211b6aecdc73b6e5fe66d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 14 06:22:32 localhost podman[344740]: 2025-10-14 10:22:32.179785516 +0000 UTC m=+0.165802890 container init 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:22:32 localhost podman[344740]: 2025-10-14 10:22:32.234270018 +0000 UTC m=+0.220287382 container start 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 14 06:22:32 localhost dnsmasq[344758]: started, version 2.85 cachesize 150 Oct 14 06:22:32 localhost dnsmasq[344758]: DNS service limited to local subnets Oct 14 06:22:32 localhost 
dnsmasq[344758]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 14 06:22:32 localhost dnsmasq[344758]: warning: no upstream servers configured Oct 14 06:22:32 localhost dnsmasq-dhcp[344758]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 14 06:22:32 localhost dnsmasq[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/addn_hosts - 0 addresses Oct 14 06:22:32 localhost dnsmasq-dhcp[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/host Oct 14 06:22:32 localhost dnsmasq-dhcp[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/opts Oct 14 06:22:32 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:22:32.367 270389 INFO neutron.agent.dhcp.agent [None req-1b83ba0d-8ba6-46c3-a2f6-60fa2665eb4e - - - - - -] DHCP configuration for ports {'c531f8c4-8c93-478c-ad3f-f16b6d3c66a8'} is completed#033[00m Oct 14 06:22:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch Oct 14 06:22:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:22:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' 
cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:22:32 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice_bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:22:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:22:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:33 localhost 
openstack_network_exporter[248748]: ERROR 10:22:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:22:33 localhost openstack_network_exporter[248748]: ERROR 10:22:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:22:33 localhost openstack_network_exporter[248748]: ERROR 10:22:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:22:33 localhost openstack_network_exporter[248748]: ERROR 10:22:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:22:33 localhost openstack_network_exporter[248748]: Oct 14 06:22:33 localhost openstack_network_exporter[248748]: ERROR 10:22:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:22:33 localhost openstack_network_exporter[248748]: Oct 14 06:22:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 69 op/s Oct 14 06:22:33 localhost nova_compute[295778]: 2025-10-14 10:22:33.523 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:22:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:22:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", 
"allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:33 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:22:33.857 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:22:33Z, description=, device_id=e15859bf-e697-477d-933b-0ed8a123bd13, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c84abc96-9288-4af5-b1ec-dc73ed9e3cd5, ip_allocation=immediate, mac_address=fa:16:3e:f8:c9:79, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:22:29Z, description=, dns_domain=, id=394d67df-5390-48c3-8dbd-d09f9af967e5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-520585575-network, port_security_enabled=True, project_id=70baa45d313242eeba98f08d3412af48, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13917, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3565, status=ACTIVE, subnets=['a064c24b-b25f-4e53-bf1b-9abdeba84717'], tags=[], tenant_id=70baa45d313242eeba98f08d3412af48, updated_at=2025-10-14T10:22:30Z, vlan_transparent=None, 
network_id=394d67df-5390-48c3-8dbd-d09f9af967e5, port_security_enabled=False, project_id=70baa45d313242eeba98f08d3412af48, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3587, status=DOWN, tags=[], tenant_id=70baa45d313242eeba98f08d3412af48, updated_at=2025-10-14T10:22:33Z on network 394d67df-5390-48c3-8dbd-d09f9af967e5#033[00m Oct 14 06:22:34 localhost dnsmasq[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/addn_hosts - 1 addresses Oct 14 06:22:34 localhost dnsmasq-dhcp[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/host Oct 14 06:22:34 localhost podman[344776]: 2025-10-14 10:22:34.073794841 +0000 UTC m=+0.071323099 container kill 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 14 06:22:34 localhost dnsmasq-dhcp[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/opts Oct 14 06:22:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:22:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:22:34 localhost systemd[1]: tmp-crun.SzPCTI.mount: Deactivated successfully. 
Oct 14 06:22:34 localhost podman[344789]: 2025-10-14 10:22:34.181626704 +0000 UTC m=+0.088207805 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:22:34 localhost podman[344789]: 2025-10-14 10:22:34.220210485 +0000 UTC m=+0.126791616 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:22:34 localhost podman[344787]: 2025-10-14 10:22:34.240468031 +0000 UTC m=+0.148971093 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, 
org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Oct 14 06:22:34 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:22:34 localhost podman[344787]: 2025-10-14 10:22:34.247366714 +0000 UTC m=+0.155869756 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:22:34 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:22:34 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:22:34.327 270389 INFO neutron.agent.dhcp.agent [None req-76bc7df9-e60e-42ad-8267-b2056296160f - - - - - -] DHCP configuration for ports {'c84abc96-9288-4af5-b1ec-dc73ed9e3cd5'} is completed#033[00m Oct 14 06:22:34 localhost nova_compute[295778]: 2025-10-14 10:22:34.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:34 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:22:34.987 270389 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-14T10:22:33Z, description=, device_id=e15859bf-e697-477d-933b-0ed8a123bd13, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c84abc96-9288-4af5-b1ec-dc73ed9e3cd5, ip_allocation=immediate, mac_address=fa:16:3e:f8:c9:79, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-14T10:22:29Z, description=, dns_domain=, id=394d67df-5390-48c3-8dbd-d09f9af967e5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-520585575-network, port_security_enabled=True, project_id=70baa45d313242eeba98f08d3412af48, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13917, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3565, status=ACTIVE, subnets=['a064c24b-b25f-4e53-bf1b-9abdeba84717'], tags=[], tenant_id=70baa45d313242eeba98f08d3412af48, updated_at=2025-10-14T10:22:30Z, vlan_transparent=None, network_id=394d67df-5390-48c3-8dbd-d09f9af967e5, port_security_enabled=False, project_id=70baa45d313242eeba98f08d3412af48, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3587, status=DOWN, tags=[], tenant_id=70baa45d313242eeba98f08d3412af48, updated_at=2025-10-14T10:22:33Z on network 394d67df-5390-48c3-8dbd-d09f9af967e5#033[00m Oct 14 06:22:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:22:35 localhost dnsmasq[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/addn_hosts - 1 addresses Oct 14 06:22:35 localhost dnsmasq-dhcp[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/host Oct 14 06:22:35 localhost dnsmasq-dhcp[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/opts Oct 14 06:22:35 localhost podman[344850]: 2025-10-14 10:22:35.212875285 +0000 UTC m=+0.059601548 container kill 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009) Oct 14 06:22:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 78 op/s Oct 14 06:22:35 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:22:35.496 270389 INFO neutron.agent.dhcp.agent [None req-cc803c4d-ec79-42dd-844a-cfed907506e8 - - - - - -] DHCP configuration for ports {'c84abc96-9288-4af5-b1ec-dc73ed9e3cd5'} is completed#033[00m Oct 14 06:22:35 localhost 
nova_compute[295778]: 2025-10-14 10:22:35.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:22:35 localhost nova_compute[295778]: 2025-10-14 10:22:35.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:22:35 localhost nova_compute[295778]: 2025-10-14 10:22:35.903 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:22:35 localhost nova_compute[295778]: 2025-10-14 10:22:35.903 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:22:35 localhost nova_compute[295778]: 2025-10-14 10:22:35.919 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:22:36 localhost ovn_controller[156286]: 2025-10-14T10:22:36Z|00420|ovn_bfd|INFO|Enabled BFD on interface ovn-31b4da-0 Oct 14 06:22:36 localhost ovn_controller[156286]: 2025-10-14T10:22:36Z|00421|ovn_bfd|INFO|Enabled BFD on interface ovn-953af5-0 Oct 14 06:22:36 localhost ovn_controller[156286]: 2025-10-14T10:22:36Z|00422|ovn_bfd|INFO|Enabled BFD on interface ovn-4e3575-0 Oct 14 06:22:36 localhost nova_compute[295778]: 2025-10-14 10:22:36.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:36 localhost nova_compute[295778]: 2025-10-14 10:22:36.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:36 localhost nova_compute[295778]: 2025-10-14 10:22:36.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:36 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:22:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:36 localhost nova_compute[295778]: 2025-10-14 10:22:36.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:36 localhost nova_compute[295778]: 2025-10-14 10:22:36.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:36 localhost nova_compute[295778]: 2025-10-14 10:22:36.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:36 localhost nova_compute[295778]: 2025-10-14 10:22:36.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:22:36 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:22:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 14 06:22:36 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:22:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:22:36 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:22:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:22:36 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 
14 06:22:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:22:36 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:22:36 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:22:36 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 93f6f5f0-5b1d-40cf-9bba-aa93a08eb176 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:22:36 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 93f6f5f0-5b1d-40cf-9bba-aa93a08eb176 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:22:36 localhost ceph-mgr[300442]: [progress INFO root] Completed event 93f6f5f0-5b1d-40cf-9bba-aa93a08eb176 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:22:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:22:36 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:22:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:36 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:22:36 
localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:22:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:22:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:37 localhost nova_compute[295778]: 2025-10-14 10:22:37.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:37 localhost nova_compute[295778]: 2025-10-14 10:22:37.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:37 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:22:37 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:22:37 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:22:37 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": 
"client.alice_bob"}]': finished Oct 14 06:22:37 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:22:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 56 op/s Oct 14 06:22:37 localhost nova_compute[295778]: 2025-10-14 10:22:37.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:37 localhost nova_compute[295778]: 2025-10-14 10:22:37.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:22:37 localhost nova_compute[295778]: 2025-10-14 10:22:37.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:22:37 localhost nova_compute[295778]: 2025-10-14 10:22:37.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:22:38 localhost nova_compute[295778]: 2025-10-14 10:22:38.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:22:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:22:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:22:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:22:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:22:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:22:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:22:39 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:22:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:22:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:39 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:22:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 56 op/s Oct 14 06:22:39 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:22:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:39 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:22:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:22:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:22:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, 
sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:39 localhost nova_compute[295778]: 2025-10-14 10:22:39.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:39 localhost nova_compute[295778]: 2025-10-14 10:22:39.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:22:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:22:40 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:40 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:40 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:40 localhost ceph-mon[307093]: 
from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:22:41 localhost nova_compute[295778]: 2025-10-14 10:22:41.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.9 MiB/s wr, 61 op/s Oct 14 06:22:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:22:41 localhost ceph-osd[31330]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 21K writes, 85K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 21K writes, 7366 syncs, 2.89 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 49K keys, 12K commit groups, 1.0 writes per commit group, ingest: 39.19 MB, 0.07 MB/s#012Interval WAL: 12K writes, 5332 syncs, 2.35 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 06:22:41 localhost nova_compute[295778]: 2025-10-14 10:22:41.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:22:41 localhost nova_compute[295778]: 2025-10-14 10:22:41.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:22:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command 
mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:22:42 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3098247783' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:22:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:22:42 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3098247783' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:22:42 localhost ovn_controller[156286]: 2025-10-14T10:22:42Z|00423|ovn_bfd|INFO|Disabled BFD on interface ovn-31b4da-0 Oct 14 06:22:42 localhost ovn_controller[156286]: 2025-10-14T10:22:42Z|00424|ovn_bfd|INFO|Disabled BFD on interface ovn-953af5-0 Oct 14 06:22:42 localhost ovn_controller[156286]: 2025-10-14T10:22:42Z|00425|ovn_bfd|INFO|Disabled BFD on interface ovn-4e3575-0 Oct 14 06:22:42 localhost nova_compute[295778]: 2025-10-14 10:22:42.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:42 localhost nova_compute[295778]: 2025-10-14 10:22:42.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:42 localhost nova_compute[295778]: 2025-10-14 10:22:42.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:42 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 14 06:22:42 localhost ceph-mgr[300442]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:22:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 14 06:22:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 14 06:22:42 localhost dnsmasq[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/addn_hosts - 0 addresses Oct 14 06:22:42 localhost dnsmasq-dhcp[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/host Oct 14 06:22:42 localhost dnsmasq-dhcp[344758]: read /var/lib/neutron/dhcp/394d67df-5390-48c3-8dbd-d09f9af967e5/opts Oct 14 06:22:42 localhost podman[344976]: 2025-10-14 10:22:42.795451616 +0000 UTC m=+0.079980277 container kill 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image) Oct 14 06:22:42 localhost systemd[1]: tmp-crun.WY6wbK.mount: Deactivated successfully. Oct 14 06:22:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:22:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:22:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:22:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:22:42 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:42 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 14 06:22:42 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:42 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:22:42 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:22:42 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:22:42 localhost systemd[1]: tmp-crun.6AEg9O.mount: Deactivated successfully. Oct 14 06:22:42 localhost podman[344989]: 2025-10-14 10:22:42.928086816 +0000 UTC m=+0.096107094 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 14 06:22:42 localhost podman[344990]: 2025-10-14 10:22:42.945837747 +0000 UTC m=+0.106762947 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:22:42 localhost podman[344989]: 2025-10-14 10:22:42.973058636 +0000 UTC m=+0.141078914 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7) Oct 14 06:22:42 localhost podman[344996]: 2025-10-14 10:22:42.990805407 +0000 UTC m=+0.144244379 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3) Oct 14 
06:22:42 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:22:43 localhost podman[344990]: 2025-10-14 10:22:43.045779981 +0000 UTC m=+0.206705211 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:22:43 localhost ovn_controller[156286]: 2025-10-14T10:22:43Z|00426|binding|INFO|Releasing lport 463914a3-bc7c-43a3-b544-baedda608a47 from this chassis (sb_readonly=0) Oct 14 06:22:43 localhost ovn_controller[156286]: 2025-10-14T10:22:43Z|00427|binding|INFO|Setting lport 463914a3-bc7c-43a3-b544-baedda608a47 down in Southbound Oct 14 06:22:43 localhost 
nova_compute[295778]: 2025-10-14 10:22:43.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:43 localhost kernel: device tap463914a3-bc left promiscuous mode Oct 14 06:22:43 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:22:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:43.060 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005486731.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc01f64ea-2f05-593f-b3d0-f0dbdfc9210e-394d67df-5390-48c3-8dbd-d09f9af967e5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-394d67df-5390-48c3-8dbd-d09f9af967e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '70baa45d313242eeba98f08d3412af48', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005486731.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c16beb1-7f30-4ee6-92ee-194b9077ad11, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=463914a3-bc7c-43a3-b544-baedda608a47) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:22:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:43.062 161932 INFO 
neutron.agent.ovn.metadata.agent [-] Port 463914a3-bc7c-43a3-b544-baedda608a47 in datapath 394d67df-5390-48c3-8dbd-d09f9af967e5 unbound from our chassis#033[00m Oct 14 06:22:43 localhost nova_compute[295778]: 2025-10-14 10:22:43.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:43.067 161932 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 394d67df-5390-48c3-8dbd-d09f9af967e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 14 06:22:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:43.069 320313 DEBUG oslo.privsep.daemon [-] privsep: reply[3b179c85-8572-4ce7-b4eb-1fda5941fd21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 14 06:22:43 localhost nova_compute[295778]: 2025-10-14 10:22:43.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:43 localhost podman[344996]: 2025-10-14 10:22:43.074208254 +0000 UTC m=+0.227647216 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller) Oct 14 06:22:43 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. Oct 14 06:22:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 4.5 KiB/s rd, 60 KiB/s wr, 14 op/s Oct 14 06:22:43 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:43 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 14 06:22:43 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:22:44 localhost dnsmasq[344758]: exiting on receipt of SIGTERM Oct 14 06:22:44 localhost podman[345080]: 2025-10-14 10:22:44.338296398 +0000 UTC m=+0.063478671 container kill 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:22:44 localhost systemd[1]: libpod-2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57.scope: Deactivated successfully. Oct 14 06:22:44 localhost podman[345093]: 2025-10-14 10:22:44.41923039 +0000 UTC m=+0.066232464 container died 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:22:44 localhost podman[345093]: 2025-10-14 10:22:44.452071188 +0000 UTC m=+0.099073192 container cleanup 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 14 06:22:44 localhost systemd[1]: libpod-conmon-2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57.scope: Deactivated successfully. 
Oct 14 06:22:44 localhost podman[345095]: 2025-10-14 10:22:44.49976403 +0000 UTC m=+0.137744206 container remove 2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-394d67df-5390-48c3-8dbd-d09f9af967e5, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Oct 14 06:22:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:22:44.526 270389 INFO neutron.agent.dhcp.agent [None req-c040894a-81eb-4e88-8c73-68a323b8bac9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:22:44 localhost neutron_dhcp_agent[270385]: 2025-10-14 10:22:44.527 270389 INFO neutron.agent.dhcp.agent [None req-c040894a-81eb-4e88-8c73-68a323b8bac9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 14 06:22:44 localhost nova_compute[295778]: 2025-10-14 10:22:44.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:44 localhost nova_compute[295778]: 2025-10-14 10:22:44.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:44 localhost systemd[1]: var-lib-containers-storage-overlay-b1a591d6e5e67a89be423b20cba777761a5713bdf2211b6aecdc73b6e5fe66d1-merged.mount: Deactivated successfully. Oct 14 06:22:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a1a0ca676b9a7c6add3081c6eee7fbf420a049aa70b5a7ba4b33c48bef08c57-userdata-shm.mount: Deactivated successfully. 
Oct 14 06:22:44 localhost systemd[1]: run-netns-qdhcp\x2d394d67df\x2d5390\x2d48c3\x2d8dbd\x2dd09f9af967e5.mount: Deactivated successfully. Oct 14 06:22:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "01d81bd7-5977-42bd-b754-479f976b180f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:22:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:01d81bd7-5977-42bd-b754-479f976b180f, vol_name:cephfs) < "" Oct 14 06:22:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/01d81bd7-5977-42bd-b754-479f976b180f/.meta.tmp' Oct 14 06:22:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/01d81bd7-5977-42bd-b754-479f976b180f/.meta.tmp' to config b'/volumes/_nogroup/01d81bd7-5977-42bd-b754-479f976b180f/.meta' Oct 14 06:22:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:01d81bd7-5977-42bd-b754-479f976b180f, vol_name:cephfs) < "" Oct 14 06:22:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "01d81bd7-5977-42bd-b754-479f976b180f", "format": "json"}]: dispatch Oct 14 06:22:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:01d81bd7-5977-42bd-b754-479f976b180f, vol_name:cephfs) < "" Oct 14 06:22:44 localhost ceph-mgr[300442]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:01d81bd7-5977-42bd-b754-479f976b180f, vol_name:cephfs) < "" Oct 14 06:22:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:22:44 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:22:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:22:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1016538579' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:22:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:22:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1016538579' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:22:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:22:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 81 KiB/s wr, 44 op/s Oct 14 06:22:46 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch Oct 14 06:22:46 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:22:46 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:46 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:22:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:22:46 localhost ceph-osd[32282]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats 
**#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 18K writes, 72K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s#012Cumulative WAL: 18K writes, 6492 syncs, 2.88 writes per sync, written: 0.05 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 21.20 MB, 0.04 MB/s#012Interval WAL: 11K writes, 4853 syncs, 2.38 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 14 06:22:46 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:22:46 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:46 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:46 localhost 
ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:22:46 localhost nova_compute[295778]: 2025-10-14 10:22:46.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:22:46 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:22:46 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:22:46 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:22:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 57 KiB/s wr, 34 op/s Oct 14 06:22:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:22:48 localhost ceph-mon[307093]: 
log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2354815628' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 14 06:22:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 14 06:22:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2354815628' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 14 06:22:48 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "01d81bd7-5977-42bd-b754-479f976b180f", "format": "json"}]: dispatch
Oct 14 06:22:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:01d81bd7-5977-42bd-b754-479f976b180f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:22:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:01d81bd7-5977-42bd-b754-479f976b180f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:22:48 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:22:48.902+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '01d81bd7-5977-42bd-b754-479f976b180f' of type subvolume
Oct 14 06:22:48 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '01d81bd7-5977-42bd-b754-479f976b180f' of type subvolume
Oct 14 06:22:48 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "01d81bd7-5977-42bd-b754-479f976b180f", "force": true, "format": "json"}]: dispatch
Oct 14 06:22:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:01d81bd7-5977-42bd-b754-479f976b180f, vol_name:cephfs) < ""
Oct 14 06:22:48 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/01d81bd7-5977-42bd-b754-479f976b180f'' moved to trashcan
Oct 14 06:22:48 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:22:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:01d81bd7-5977-42bd-b754-479f976b180f, vol_name:cephfs) < ""
Oct 14 06:22:49 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 14 06:22:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 57 KiB/s wr, 34 op/s
Oct 14 06:22:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Oct 14 06:22:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 14 06:22:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Oct 14 06:22:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 14 06:22:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 14 06:22:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:49 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 14 06:22:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1
Oct 14 06:22:49 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 14 06:22:49 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:49 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 14 06:22:49 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 14 06:22:49 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 14 06:22:49 localhost nova_compute[295778]: 2025-10-14 10:22:49.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:22:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 14 06:22:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:22:51 localhost nova_compute[295778]: 2025-10-14 10:22:51.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 94 KiB/s wr, 38 op/s
Oct 14 06:22:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch
Oct 14 06:22:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:22:52 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 14 06:22:52 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:52 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice with tenant 40ca4558a36f42aeba3e8c219141b2fc
Oct 14 06:22:52 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:22:52 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:22:52 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:22:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:22:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 58 KiB/s wr, 33 op/s
Oct 14 06:22:53 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:53 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:22:53 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:22:54 localhost nova_compute[295778]: 2025-10-14 10:22:54.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:22:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 06:22:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 80 KiB/s wr, 37 op/s
Oct 14 06:22:55 localhost podman[345125]: 2025-10-14 10:22:55.555117358 +0000 UTC m=+0.090976570 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 14 06:22:55 localhost podman[345125]: 2025-10-14 10:22:55.570056323 +0000 UTC m=+0.105915515 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 14 06:22:55 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 06:22:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch
Oct 14 06:22:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 14 06:22:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:56 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Oct 14 06:22:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 14 06:22:56 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 14 06:22:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:56 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch
Oct 14 06:22:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1
Oct 14 06:22:56 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 14 06:22:56 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < ""
Oct 14 06:22:56 localhost nova_compute[295778]: 2025-10-14 10:22:56.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:22:57 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:57 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 14 06:22:57 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 14 06:22:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 59 KiB/s wr, 7 op/s
Oct 14 06:22:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:57.646 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:22:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:57.647 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:22:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:22:57.647 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:22:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 59 KiB/s wr, 7 op/s
Oct 14 06:22:59 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch
Oct 14 06:22:59 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:22:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 14 06:22:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:22:59 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice with tenant 40ca4558a36f42aeba3e8c219141b2fc
Oct 14 06:22:59 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0)
Oct 14 06:22:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:22:59 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:22:59 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < ""
Oct 14 06:22:59 localhost nova_compute[295778]: 2025-10-14 10:22:59.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:23:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:23:00 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 14 06:23:00 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch
Oct 14 06:23:00 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished
Oct 14 06:23:00 localhost podman[246584]: time="2025-10-14T10:23:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 06:23:00 localhost podman[246584]: @ - - [14/Oct/2025:10:23:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1"
Oct 14 06:23:00 localhost podman[246584]: @ - - [14/Oct/2025:10:23:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18906 "" "Go-http-client/1.1"
Oct 14 06:23:01 localhost nova_compute[295778]: 2025-10-14 10:23:01.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 06:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 06:23:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 96 KiB/s wr, 39 op/s
Oct 14 06:23:01 localhost podman[345145]: 2025-10-14 10:23:01.550580785 +0000 UTC m=+0.088351109 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 06:23:01 localhost podman[345145]: 2025-10-14 10:23:01.562064189 +0000 UTC m=+0.099834563 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:23:01 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 06:23:01 localhost podman[345146]: 2025-10-14 10:23:01.651386693 +0000 UTC m=+0.186688712 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 14 06:23:01 localhost podman[345146]: 2025-10-14 10:23:01.664162702 +0000 UTC m=+0.199464701 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 06:23:01 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:23:02 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch Oct 14 06:23:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:02 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 14 06:23:02 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:23:02 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 14 06:23:02 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 14 06:23:02 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 14 06:23:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:02 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": 
"6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice", "format": "json"}]: dispatch Oct 14 06:23:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:02 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:23:02 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:23:02 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:03 localhost openstack_network_exporter[248748]: ERROR 10:23:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:23:03 localhost openstack_network_exporter[248748]: ERROR 10:23:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:23:03 localhost openstack_network_exporter[248748]: ERROR 10:23:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:23:03 localhost openstack_network_exporter[248748]: ERROR 10:23:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:23:03 localhost openstack_network_exporter[248748]: Oct 14 06:23:03 localhost openstack_network_exporter[248748]: ERROR 10:23:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:23:03 localhost openstack_network_exporter[248748]: Oct 14 06:23:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 
active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 59 KiB/s wr, 35 op/s Oct 14 06:23:03 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 14 06:23:03 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 14 06:23:03 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 14 06:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:23:04 localhost podman[345188]: 2025-10-14 10:23:04.542473415 +0000 UTC m=+0.080050800 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:23:04 localhost podman[345187]: 2025-10-14 10:23:04.586536842 +0000 UTC m=+0.131150202 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 14 06:23:04 localhost podman[345187]: 2025-10-14 10:23:04.601141028 +0000 UTC m=+0.145754368 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator 
team, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 14 06:23:04 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:23:04 localhost podman[345188]: 2025-10-14 10:23:04.639619846 +0000 UTC m=+0.177197161 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=multipathd) Oct 14 06:23:04 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:23:04 localhost nova_compute[295778]: 2025-10-14 10:23:04.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 71 KiB/s wr, 37 op/s Oct 14 06:23:05 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:23:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:23:05 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 
172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:23:05 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice_bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:23:06 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:23:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:06 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, 
tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:06 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:23:06 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:06 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:06 localhost nova_compute[295778]: 2025-10-14 10:23:06.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 49 KiB/s wr, 33 op/s Oct 14 06:23:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:23:09 Oct 14 06:23:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:23:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:23:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['vms', 'volumes', 'manila_metadata', '.mgr', 'images', 
'manila_data', 'backups'] Oct 14 06:23:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:23:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 49 KiB/s wr, 33 op/s Oct 14 06:23:09 localhost 
ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:23:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:23:09 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 14 06:23:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:23:09 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 
32 (current 32) Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32) Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:23:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0011592249441652709 of space, bias 4.0, pg target 0.9227430555555557 quantized to 16 (current 16) Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:23:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:23:09 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": 
"client.alice_bob", "format": "json"} : dispatch Oct 14 06:23:09 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:23:09 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:23:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:09 localhost nova_compute[295778]: 2025-10-14 10:23:09.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:11 localhost nova_compute[295778]: 2025-10-14 10:23:11.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:11 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3aa21700-6e75-4c49-a269-77bd18f63d8d", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:3aa21700-6e75-4c49-a269-77bd18f63d8d, vol_name:cephfs) < "" Oct 14 06:23:11 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3aa21700-6e75-4c49-a269-77bd18f63d8d/.meta.tmp' Oct 14 06:23:11 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3aa21700-6e75-4c49-a269-77bd18f63d8d/.meta.tmp' to config b'/volumes/_nogroup/3aa21700-6e75-4c49-a269-77bd18f63d8d/.meta' Oct 14 06:23:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:3aa21700-6e75-4c49-a269-77bd18f63d8d, vol_name:cephfs) < "" Oct 14 06:23:11 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": 
"3aa21700-6e75-4c49-a269-77bd18f63d8d", "format": "json"}]: dispatch Oct 14 06:23:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3aa21700-6e75-4c49-a269-77bd18f63d8d, vol_name:cephfs) < "" Oct 14 06:23:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3aa21700-6e75-4c49-a269-77bd18f63d8d, vol_name:cephfs) < "" Oct 14 06:23:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:11 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 80 KiB/s wr, 36 op/s Oct 14 06:23:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch Oct 14 06:23:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:23:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 
172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:23:12 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice_bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:23:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:23:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:12 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, 
vol_name:cephfs) < "" Oct 14 06:23:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:23:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:23:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:23:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 43 KiB/s wr, 5 op/s Oct 14 06:23:13 localhost podman[345229]: 2025-10-14 10:23:13.552542664 +0000 UTC m=+0.084418765 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:23:13 localhost podman[345229]: 2025-10-14 10:23:13.562507278 +0000 UTC m=+0.094383389 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:23:13 localhost systemd[1]: tmp-crun.Hj34uI.mount: Deactivated successfully. 
Oct 14 06:23:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:23:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:13 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:13 localhost podman[345227]: 2025-10-14 10:23:13.607075668 +0000 UTC m=+0.146617552 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public) Oct 14 06:23:13 localhost podman[345227]: 2025-10-14 10:23:13.64909369 +0000 UTC m=+0.188635614 container exec_died 
306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, com.redhat.component=ubi9-minimal-container) Oct 14 06:23:13 localhost podman[345228]: 2025-10-14 10:23:13.652325125 +0000 UTC m=+0.187752620 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:23:13 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:23:13 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:23:13 localhost podman[345228]: 2025-10-14 10:23:13.727342461 +0000 UTC m=+0.262769956 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 06:23:13 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:23:14 localhost ovn_controller[156286]: 2025-10-14T10:23:14Z|00428|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Oct 14 06:23:14 localhost nova_compute[295778]: 2025-10-14 10:23:14.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8, vol_name:cephfs) < "" Oct 14 06:23:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8/.meta.tmp' Oct 14 06:23:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8/.meta.tmp' to config b'/volumes/_nogroup/75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8/.meta' Oct 14 06:23:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8, vol_name:cephfs) < "" Oct 14 06:23:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8", "format": "json"}]: dispatch Oct 14 06:23:14 localhost 
ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8, vol_name:cephfs) < "" Oct 14 06:23:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8, vol_name:cephfs) < "" Oct 14 06:23:14 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:14 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0. 
Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.171059) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70 Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437395171134, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1481, "num_deletes": 265, "total_data_size": 1131939, "memory_usage": 1165280, "flush_reason": "Manual Compaction"} Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437395182246, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1105428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37255, "largest_seqno": 38735, "table_properties": {"data_size": 1098930, "index_size": 3456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16203, "raw_average_key_size": 21, "raw_value_size": 1084881, "raw_average_value_size": 1408, "num_data_blocks": 151, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760437325, "oldest_key_time": 1760437325, "file_creation_time": 1760437395, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}} Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 11230 microseconds, and 4334 cpu microseconds. Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.182296) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1105428 bytes OK Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.182319) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.184559) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.184586) EVENT_LOG_v1 {"time_micros": 1760437395184577, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.184610) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1124883, prev total WAL file 
size 1125373, number of live WAL files 2. Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.185352) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353136' seq:72057594037927935, type:22 .. '6C6F676D0034373731' seq:0, type:0; will stop at (end) Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1079KB)], [69(15MB)] Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437395185395, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 17527842, "oldest_snapshot_seqno": -1} Oct 14 06:23:15 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "63982f0d-ef57-43a5-8889-f331afc68cb5", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:63982f0d-ef57-43a5-8889-f331afc68cb5, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 14170 keys, 17267472 bytes, temperature: kUnknown Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: 
EVENT_LOG_v1 {"time_micros": 1760437395298237, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 17267472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17186949, "index_size": 44006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 382279, "raw_average_key_size": 26, "raw_value_size": 16946432, "raw_average_value_size": 1195, "num_data_blocks": 1625, "num_entries": 14170, "num_filter_entries": 14170, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760437395, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}} Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.299031) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 17267472 bytes Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.302051) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.7 rd, 152.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 15.7 +0.0 blob) out(16.5 +0.0 blob), read-write-amplify(31.5) write-amplify(15.6) OK, records in: 14722, records dropped: 552 output_compression: NoCompression Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.302081) EVENT_LOG_v1 {"time_micros": 1760437395302068, "job": 42, "event": "compaction_finished", "compaction_time_micros": 113304, "compaction_time_cpu_micros": 50633, "output_level": 6, "num_output_files": 1, "total_output_size": 17267472, "num_input_records": 14722, "num_output_records": 14170, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437395302612, "job": 42, "event": "table_file_deletion", "file_number": 71} Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437395305310, 
"job": 42, "event": "table_file_deletion", "file_number": 69} Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.185254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.305432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.305440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.305443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.305446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:23:15 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:23:15.305449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:23:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:63982f0d-ef57-43a5-8889-f331afc68cb5, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 74 KiB/s wr, 8 op/s Oct 14 06:23:15 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:23:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 14 06:23:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:23:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 14 06:23:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:23:15 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:23:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:16 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 14 06:23:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:16 localhost ceph-mgr[300442]: 
[volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:23:16 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:23:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:16 localhost nova_compute[295778]: 2025-10-14 10:23:16.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 14 06:23:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 14 06:23:16 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 14 06:23:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 62 KiB/s wr, 6 op/s Oct 14 06:23:18 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3aa21700-6e75-4c49-a269-77bd18f63d8d", "format": "json"}]: dispatch Oct 14 06:23:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3aa21700-6e75-4c49-a269-77bd18f63d8d, format:json, prefix:fs 
clone status, vol_name:cephfs) < "" Oct 14 06:23:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3aa21700-6e75-4c49-a269-77bd18f63d8d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:18 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:18.279+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3aa21700-6e75-4c49-a269-77bd18f63d8d' of type subvolume Oct 14 06:23:18 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3aa21700-6e75-4c49-a269-77bd18f63d8d' of type subvolume Oct 14 06:23:18 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3aa21700-6e75-4c49-a269-77bd18f63d8d", "force": true, "format": "json"}]: dispatch Oct 14 06:23:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3aa21700-6e75-4c49-a269-77bd18f63d8d, vol_name:cephfs) < "" Oct 14 06:23:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3aa21700-6e75-4c49-a269-77bd18f63d8d'' moved to trashcan Oct 14 06:23:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3aa21700-6e75-4c49-a269-77bd18f63d8d, vol_name:cephfs) < "" Oct 14 06:23:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": 
"6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:23:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:23:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:23:19 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:23:19 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:23:19 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:19 localhost 
ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 62 KiB/s wr, 6 op/s Oct 14 06:23:19 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:23:19 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:19 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:19 localhost nova_compute[295778]: 2025-10-14 10:23:19.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:21 localhost nova_compute[295778]: 2025-10-14 10:23:21.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:21 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8", "format": "json"}]: dispatch Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 112 KiB/s wr, 11 op/s Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:21 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:21.469+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8' of type subvolume Oct 14 06:23:21 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported 
operation 'clone-status' is not allowed on subvolume '75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8' of type subvolume Oct 14 06:23:21 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8", "force": true, "format": "json"}]: dispatch Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8, vol_name:cephfs) < "" Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8'' moved to trashcan Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75f67ec9-693f-4fb5-a3a6-abc0f5aa5db8, vol_name:cephfs) < "" Oct 14 06:23:21 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "63982f0d-ef57-43a5-8889-f331afc68cb5", "snap_name": "0fba70cb-9d21-4be5-ba71-dd3080aa1349", "force": true, "format": "json"}]: dispatch Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:63982f0d-ef57-43a5-8889-f331afc68cb5, prefix:fs subvolumegroup snapshot rm, snap_name:0fba70cb-9d21-4be5-ba71-dd3080aa1349, vol_name:cephfs) < "" Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:63982f0d-ef57-43a5-8889-f331afc68cb5, prefix:fs 
subvolumegroup snapshot rm, snap_name:0fba70cb-9d21-4be5-ba71-dd3080aa1349, vol_name:cephfs) < "" Oct 14 06:23:21 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "63982f0d-ef57-43a5-8889-f331afc68cb5", "force": true, "format": "json"}]: dispatch Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:63982f0d-ef57-43a5-8889-f331afc68cb5, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:63982f0d-ef57-43a5-8889-f331afc68cb5, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 14 06:23:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:23:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:23:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 14 06:23:22 localhost 
ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 14 06:23:22 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:23:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 14 06:23:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:23:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:23:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:22 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : 
dispatch Oct 14 06:23:22 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 14 06:23:22 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:23:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 81 KiB/s wr, 7 op/s Oct 14 06:23:24 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 14 06:23:24 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5644 writes, 38K keys, 5644 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s#012Cumulative WAL: 5644 writes, 5644 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2452 writes, 11K keys, 2452 commit groups, 1.0 writes per commit group, ingest: 10.52 MB, 0.02 MB/s#012Interval WAL: 2452 writes, 2452 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 135.0 0.35 0.13 21 0.017 0 0 0.0 0.0#012 L6 1/0 16.47 MB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 6.9 179.6 164.0 1.98 0.95 20 0.099 254K 10K 0.0 0.0#012 Sum 1/0 
16.47 MB 0.0 0.3 0.0 0.3 0.4 0.1 0.0 7.9 152.5 159.6 2.33 1.07 41 0.057 254K 10K 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 13.0 149.2 149.4 0.93 0.43 16 0.058 110K 4294 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 0.0 179.6 164.0 1.98 0.95 20 0.099 254K 10K 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 136.6 0.35 0.13 20 0.017 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0.00 0.00 1 0.004 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.046, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.36 GB write, 0.31 MB/s write, 0.35 GB read, 0.30 MB/s read, 2.3 seconds#012Interval compaction: 0.14 GB write, 0.23 MB/s write, 0.14 GB read, 0.23 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563d4a76f350#2 capacity: 304.00 MB usage: 32.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000169 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1721,30.45 MB,10.0175%) FilterBlock(41,731.61 KB,0.23502%) IndexBlock(41,935.36 KB,0.300473%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Oct 14 06:23:24 localhost nova_compute[295778]: 2025-10-14 10:23:24.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "84f18cc0-5bd6-4afc-8ce2-83824802de84", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:25 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a71f1ac8-e632-4724-85bc-bac6b2948d1c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:25 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a71f1ac8-e632-4724-85bc-bac6b2948d1c, vol_name:cephfs) < "" Oct 14 06:23:25 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a71f1ac8-e632-4724-85bc-bac6b2948d1c/.meta.tmp' Oct 14 06:23:25 localhost 
ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a71f1ac8-e632-4724-85bc-bac6b2948d1c/.meta.tmp' to config b'/volumes/_nogroup/a71f1ac8-e632-4724-85bc-bac6b2948d1c/.meta' Oct 14 06:23:25 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a71f1ac8-e632-4724-85bc-bac6b2948d1c, vol_name:cephfs) < "" Oct 14 06:23:25 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a71f1ac8-e632-4724-85bc-bac6b2948d1c", "format": "json"}]: dispatch Oct 14 06:23:25 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a71f1ac8-e632-4724-85bc-bac6b2948d1c, vol_name:cephfs) < "" Oct 14 06:23:25 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a71f1ac8-e632-4724-85bc-bac6b2948d1c, vol_name:cephfs) < "" Oct 14 06:23:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:25 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 110 KiB/s wr, 10 op/s Oct 14 06:23:25 localhost ceph-mgr[300442]: 
log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "r", "format": "json"}]: dispatch Oct 14 06:23:25 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:23:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:23:25 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID alice bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:23:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:23:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r 
pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:25 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:25 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:26 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "eace25de-0f4c-4af0-aa72-4cf29239e76f", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:eace25de-0f4c-4af0-aa72-4cf29239e76f, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:eace25de-0f4c-4af0-aa72-4cf29239e76f, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:26 localhost nova_compute[295778]: 2025-10-14 10:23:26.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:23:26 localhost podman[345298]: 2025-10-14 10:23:26.530835921 +0000 UTC m=+0.074189014 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:23:26 localhost podman[345298]: 2025-10-14 10:23:26.541581845 
+0000 UTC m=+0.084934968 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:23:26 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:23:26 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:23:26 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:26 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow r pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 79 KiB/s wr, 7 op/s Oct 14 06:23:28 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "98689522-5548-41a2-9c7c-0b808a626fcd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "84f18cc0-5bd6-4afc-8ce2-83824802de84", "format": "json"}]: dispatch Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:98689522-5548-41a2-9c7c-0b808a626fcd, 
vol_name:cephfs) < "" Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/84f18cc0-5bd6-4afc-8ce2-83824802de84/98689522-5548-41a2-9c7c-0b808a626fcd/.meta.tmp' Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/84f18cc0-5bd6-4afc-8ce2-83824802de84/98689522-5548-41a2-9c7c-0b808a626fcd/.meta.tmp' to config b'/volumes/84f18cc0-5bd6-4afc-8ce2-83824802de84/98689522-5548-41a2-9c7c-0b808a626fcd/.meta' Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:98689522-5548-41a2-9c7c-0b808a626fcd, vol_name:cephfs) < "" Oct 14 06:23:28 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98689522-5548-41a2-9c7c-0b808a626fcd", "group_name": "84f18cc0-5bd6-4afc-8ce2-83824802de84", "format": "json"}]: dispatch Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, prefix:fs subvolume getpath, sub_name:98689522-5548-41a2-9c7c-0b808a626fcd, vol_name:cephfs) < "" Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, prefix:fs subvolume getpath, sub_name:98689522-5548-41a2-9c7c-0b808a626fcd, vol_name:cephfs) < "" Oct 14 06:23:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:28 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 
172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:28 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a82b025-c015-4b0a-816f-f6ad9b43cf92", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a82b025-c015-4b0a-816f-f6ad9b43cf92, vol_name:cephfs) < "" Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a82b025-c015-4b0a-816f-f6ad9b43cf92/.meta.tmp' Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a82b025-c015-4b0a-816f-f6ad9b43cf92/.meta.tmp' to config b'/volumes/_nogroup/7a82b025-c015-4b0a-816f-f6ad9b43cf92/.meta' Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a82b025-c015-4b0a-816f-f6ad9b43cf92, vol_name:cephfs) < "" Oct 14 06:23:28 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a82b025-c015-4b0a-816f-f6ad9b43cf92", "format": "json"}]: dispatch Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a82b025-c015-4b0a-816f-f6ad9b43cf92, vol_name:cephfs) < "" Oct 14 06:23:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a82b025-c015-4b0a-816f-f6ad9b43cf92, vol_name:cephfs) < "" Oct 14 06:23:28 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:28 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:28 localhost nova_compute[295778]: 2025-10-14 10:23:28.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:23:28 localhost nova_compute[295778]: 2025-10-14 10:23:28.928 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:23:28 localhost nova_compute[295778]: 2025-10-14 10:23:28.929 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:23:28 localhost nova_compute[295778]: 2025-10-14 10:23:28.929 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:23:28 localhost nova_compute[295778]: 2025-10-14 
10:23:28.929 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:23:28 localhost nova_compute[295778]: 2025-10-14 10:23:28.930 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:23:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 14 06:23:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 14 06:23:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:23:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 14 06:23:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : 
dispatch Oct 14 06:23:29 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:23:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 14 06:23:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:23:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:23:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:29 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:23:29 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1001641359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:23:29 localhost nova_compute[295778]: 2025-10-14 10:23:29.414 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:23:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 79 KiB/s wr, 7 op/s Oct 14 06:23:29 localhost nova_compute[295778]: 2025-10-14 10:23:29.619 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:23:29 localhost nova_compute[295778]: 2025-10-14 10:23:29.621 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11338MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", 
"product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:23:29 localhost nova_compute[295778]: 2025-10-14 10:23:29.621 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:23:29 localhost nova_compute[295778]: 2025-10-14 10:23:29.622 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:23:29 localhost nova_compute[295778]: 2025-10-14 10:23:29.692 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:23:29 localhost nova_compute[295778]: 2025-10-14 10:23:29.693 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:23:29 localhost nova_compute[295778]: 2025-10-14 10:23:29.725 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:23:29 localhost nova_compute[295778]: 2025-10-14 10:23:29.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:29 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 14 06:23:29 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 14 06:23:29 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 14 06:23:30 localhost 
ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:23:30 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/965296649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:23:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:30 localhost nova_compute[295778]: 2025-10-14 10:23:30.172 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:23:30 localhost nova_compute[295778]: 2025-10-14 10:23:30.179 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:23:30 localhost nova_compute[295778]: 2025-10-14 10:23:30.199 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:23:30 localhost nova_compute[295778]: 2025-10-14 10:23:30.202 2 DEBUG 
nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:23:30 localhost nova_compute[295778]: 2025-10-14 10:23:30.202 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:23:30 localhost podman[246584]: time="2025-10-14T10:23:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:23:30 localhost podman[246584]: @ - - [14/Oct/2025:10:23:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:23:30 localhost podman[246584]: @ - - [14/Oct/2025:10:23:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18905 "" "Go-http-client/1.1" Oct 14 06:23:31 localhost nova_compute[295778]: 2025-10-14 10:23:31.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:31 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98689522-5548-41a2-9c7c-0b808a626fcd", "group_name": "84f18cc0-5bd6-4afc-8ce2-83824802de84", "format": "json"}]: dispatch Oct 14 06:23:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98689522-5548-41a2-9c7c-0b808a626fcd, format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, prefix:fs clone status, 
vol_name:cephfs) < "" Oct 14 06:23:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98689522-5548-41a2-9c7c-0b808a626fcd, format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:31 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98689522-5548-41a2-9c7c-0b808a626fcd' of type subvolume Oct 14 06:23:31 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:31.446+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98689522-5548-41a2-9c7c-0b808a626fcd' of type subvolume Oct 14 06:23:31 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98689522-5548-41a2-9c7c-0b808a626fcd", "force": true, "group_name": "84f18cc0-5bd6-4afc-8ce2-83824802de84", "format": "json"}]: dispatch Oct 14 06:23:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, prefix:fs subvolume rm, sub_name:98689522-5548-41a2-9c7c-0b808a626fcd, vol_name:cephfs) < "" Oct 14 06:23:31 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/84f18cc0-5bd6-4afc-8ce2-83824802de84/98689522-5548-41a2-9c7c-0b808a626fcd'' moved to trashcan Oct 14 06:23:31 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, prefix:fs subvolume rm, sub_name:98689522-5548-41a2-9c7c-0b808a626fcd, 
vol_name:cephfs) < "" Oct 14 06:23:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 131 KiB/s wr, 12 op/s Oct 14 06:23:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 14 06:23:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:32 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: Creating meta for ID bob with tenant 40ca4558a36f42aeba3e8c219141b2fc Oct 14 06:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:23:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} v 0) Oct 14 06:23:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:32 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:32 localhost podman[345363]: 2025-10-14 10:23:32.546231406 +0000 UTC m=+0.074321917 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 
'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 14 06:23:32 localhost podman[345363]: 2025-10-14 10:23:32.559114538 +0000 UTC m=+0.087205059 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"66d13c30-5830-46cb-8d8c-7ebcecc8e4d6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66d13c30-5830-46cb-8d8c-7ebcecc8e4d6, vol_name:cephfs) < "" Oct 14 06:23:32 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:23:32 localhost podman[345362]: 2025-10-14 10:23:32.612708606 +0000 UTC m=+0.142734528 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/66d13c30-5830-46cb-8d8c-7ebcecc8e4d6/.meta.tmp' Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/66d13c30-5830-46cb-8d8c-7ebcecc8e4d6/.meta.tmp' to config b'/volumes/_nogroup/66d13c30-5830-46cb-8d8c-7ebcecc8e4d6/.meta' Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66d13c30-5830-46cb-8d8c-7ebcecc8e4d6, vol_name:cephfs) < "" Oct 14 06:23:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "66d13c30-5830-46cb-8d8c-7ebcecc8e4d6", "format": "json"}]: dispatch Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66d13c30-5830-46cb-8d8c-7ebcecc8e4d6, vol_name:cephfs) < "" Oct 14 06:23:32 localhost podman[345362]: 2025-10-14 10:23:32.644778115 +0000 UTC m=+0.174803987 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66d13c30-5830-46cb-8d8c-7ebcecc8e4d6, vol_name:cephfs) < "" Oct 14 06:23:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:32 localhost ceph-mon[307093]: 
log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:32 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:23:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "eace25de-0f4c-4af0-aa72-4cf29239e76f", "snap_name": "3e806467-d127-4f61-9497-6724b82c5bc9", "force": true, "format": "json"}]: dispatch Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:eace25de-0f4c-4af0-aa72-4cf29239e76f, prefix:fs subvolumegroup snapshot rm, snap_name:3e806467-d127-4f61-9497-6724b82c5bc9, vol_name:cephfs) < "" Oct 14 06:23:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:eace25de-0f4c-4af0-aa72-4cf29239e76f, prefix:fs subvolumegroup snapshot rm, snap_name:3e806467-d127-4f61-9497-6724b82c5bc9, vol_name:cephfs) < "" Oct 14 06:23:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"} : dispatch Oct 14 06:23:33 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7", "mon", "allow r"], "format": "json"}]': finished Oct 14 06:23:33 localhost openstack_network_exporter[248748]: ERROR 10:23:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:23:33 localhost openstack_network_exporter[248748]: ERROR 10:23:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:23:33 localhost openstack_network_exporter[248748]: ERROR 10:23:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:23:33 localhost openstack_network_exporter[248748]: ERROR 10:23:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:23:33 localhost openstack_network_exporter[248748]: Oct 14 06:23:33 localhost openstack_network_exporter[248748]: ERROR 10:23:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:23:33 localhost openstack_network_exporter[248748]: Oct 14 06:23:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 81 KiB/s wr, 8 op/s Oct 14 06:23:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "84f18cc0-5bd6-4afc-8ce2-83824802de84", "force": true, "format": "json"}]: dispatch Oct 14 06:23:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, 
prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:84f18cc0-5bd6-4afc-8ce2-83824802de84, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:34 localhost nova_compute[295778]: 2025-10-14 10:23:34.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:35 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "541e5427-642f-4bda-a96d-096dd5de1d51", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:35 localhost nova_compute[295778]: 2025-10-14 10:23:35.204 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 06:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:23:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 113 KiB/s wr, 11 op/s Oct 14 06:23:35 localhost podman[345404]: 2025-10-14 10:23:35.527153957 +0000 UTC m=+0.065718621 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.name=CentOS 
Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:23:35 localhost podman[345404]: 2025-10-14 10:23:35.543058277 +0000 UTC m=+0.081622941 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
config_id=multipathd) Oct 14 06:23:35 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:23:35 localhost podman[345403]: 2025-10-14 10:23:35.636809239 +0000 UTC m=+0.178445704 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS) Oct 14 06:23:35 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : 
from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "eace25de-0f4c-4af0-aa72-4cf29239e76f", "force": true, "format": "json"}]: dispatch Oct 14 06:23:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:eace25de-0f4c-4af0-aa72-4cf29239e76f, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:eace25de-0f4c-4af0-aa72-4cf29239e76f, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:35 localhost podman[345403]: 2025-10-14 10:23:35.649382461 +0000 UTC m=+0.191018906 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Oct 14 06:23:35 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:23:35 localhost nova_compute[295778]: 2025-10-14 10:23:35.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:23:35 localhost nova_compute[295778]: 2025-10-14 10:23:35.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:23:35 localhost nova_compute[295778]: 2025-10-14 10:23:35.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:23:35 localhost nova_compute[295778]: 2025-10-14 10:23:35.967 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:23:36 localhost nova_compute[295778]: 2025-10-14 10:23:36.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:36 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "29378319-92f4-4d6d-bf55-29194a90beb1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29378319-92f4-4d6d-bf55-29194a90beb1, vol_name:cephfs) < "" Oct 14 06:23:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/29378319-92f4-4d6d-bf55-29194a90beb1/.meta.tmp' Oct 14 06:23:36 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/29378319-92f4-4d6d-bf55-29194a90beb1/.meta.tmp' to config b'/volumes/_nogroup/29378319-92f4-4d6d-bf55-29194a90beb1/.meta' Oct 14 06:23:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:29378319-92f4-4d6d-bf55-29194a90beb1, vol_name:cephfs) < "" Oct 14 06:23:36 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "29378319-92f4-4d6d-bf55-29194a90beb1", "format": "json"}]: dispatch Oct 14 06:23:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29378319-92f4-4d6d-bf55-29194a90beb1, vol_name:cephfs) < "" Oct 14 06:23:36 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:29378319-92f4-4d6d-bf55-29194a90beb1, vol_name:cephfs) < "" Oct 14 06:23:36 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:36 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:36 localhost nova_compute[295778]: 2025-10-14 10:23:36.963 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:23:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 84 KiB/s wr, 8 op/s Oct 14 06:23:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:23:37 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:23:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:23:37 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:23:37 localhost 
ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:23:37 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:23:37 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev cfd5a96f-42e7-429f-8c49-58301768ee4d (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:23:37 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev cfd5a96f-42e7-429f-8c49-58301768ee4d (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:23:37 localhost ceph-mgr[300442]: [progress INFO root] Completed event cfd5a96f-42e7-429f-8c49-58301768ee4d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:23:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:23:37 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:23:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "958b7b65-e598-4225-a741-d122ebd39d66", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66/.meta.tmp' Oct 14 06:23:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66/.meta.tmp' to config b'/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66/.meta' Oct 14 06:23:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "958b7b65-e598-4225-a741-d122ebd39d66", "format": "json"}]: dispatch Oct 14 06:23:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:37 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:37 localhost nova_compute[295778]: 2025-10-14 10:23:37.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:23:37 
localhost nova_compute[295778]: 2025-10-14 10:23:37.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:23:38 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:23:38 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:23:38 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1747d2e3-48c9-40ca-8b56-bf66723b085d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "541e5427-642f-4bda-a96d-096dd5de1d51", "format": "json"}]: dispatch Oct 14 06:23:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1747d2e3-48c9-40ca-8b56-bf66723b085d, vol_name:cephfs) < "" Oct 14 06:23:38 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/541e5427-642f-4bda-a96d-096dd5de1d51/1747d2e3-48c9-40ca-8b56-bf66723b085d/.meta.tmp' Oct 14 06:23:38 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/541e5427-642f-4bda-a96d-096dd5de1d51/1747d2e3-48c9-40ca-8b56-bf66723b085d/.meta.tmp' to config b'/volumes/541e5427-642f-4bda-a96d-096dd5de1d51/1747d2e3-48c9-40ca-8b56-bf66723b085d/.meta' Oct 14 06:23:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, 
group_name:541e5427-642f-4bda-a96d-096dd5de1d51, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1747d2e3-48c9-40ca-8b56-bf66723b085d, vol_name:cephfs) < "" Oct 14 06:23:38 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1747d2e3-48c9-40ca-8b56-bf66723b085d", "group_name": "541e5427-642f-4bda-a96d-096dd5de1d51", "format": "json"}]: dispatch Oct 14 06:23:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, prefix:fs subvolume getpath, sub_name:1747d2e3-48c9-40ca-8b56-bf66723b085d, vol_name:cephfs) < "" Oct 14 06:23:38 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, prefix:fs subvolume getpath, sub_name:1747d2e3-48c9-40ca-8b56-bf66723b085d, vol_name:cephfs) < "" Oct 14 06:23:38 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:38 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:38 localhost nova_compute[295778]: 2025-10-14 10:23:38.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:23:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:23:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:23:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:23:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:23:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:23:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:23:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 84 KiB/s wr, 8 op/s Oct 14 06:23:39 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:23:39 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:23:39 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:23:39 localhost nova_compute[295778]: 2025-10-14 10:23:39.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:39 localhost nova_compute[295778]: 2025-10-14 10:23:39.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:23:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:40 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:23:40 localhost ceph-mgr[300442]: log_channel(audit) log 
[DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "29378319-92f4-4d6d-bf55-29194a90beb1", "format": "json"}]: dispatch Oct 14 06:23:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:29378319-92f4-4d6d-bf55-29194a90beb1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:29378319-92f4-4d6d-bf55-29194a90beb1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:40 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:40.894+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29378319-92f4-4d6d-bf55-29194a90beb1' of type subvolume Oct 14 06:23:40 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '29378319-92f4-4d6d-bf55-29194a90beb1' of type subvolume Oct 14 06:23:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "29378319-92f4-4d6d-bf55-29194a90beb1", "force": true, "format": "json"}]: dispatch Oct 14 06:23:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29378319-92f4-4d6d-bf55-29194a90beb1, vol_name:cephfs) < "" Oct 14 06:23:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/29378319-92f4-4d6d-bf55-29194a90beb1'' moved to trashcan Oct 14 06:23:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:40 localhost ceph-mgr[300442]: [volumes 
INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:29378319-92f4-4d6d-bf55-29194a90beb1, vol_name:cephfs) < "" Oct 14 06:23:41 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "958b7b65-e598-4225-a741-d122ebd39d66", "auth_id": "bob", "tenant_id": "40ca4558a36f42aeba3e8c219141b2fc", "access_level": "rw", "format": "json"}]: dispatch Oct 14 06:23:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 14 06:23:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1,allow rw path=/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66/c7800ee6-8cf6-4dc0-8d0e-2ed45939ec7d", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7,allow rw pool=manila_data namespace=fsvolumens_958b7b65-e598-4225-a741-d122ebd39d66"]} v 0) Oct 14 06:23:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth caps", "entity": 
"client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1,allow rw path=/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66/c7800ee6-8cf6-4dc0-8d0e-2ed45939ec7d", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7,allow rw pool=manila_data namespace=fsvolumens_958b7b65-e598-4225-a741-d122ebd39d66"]} : dispatch Oct 14 06:23:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1,allow rw path=/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66/c7800ee6-8cf6-4dc0-8d0e-2ed45939ec7d", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7,allow rw pool=manila_data namespace=fsvolumens_958b7b65-e598-4225-a741-d122ebd39d66"]}]': finished Oct 14 06:23:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 14 06:23:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, tenant_id:40ca4558a36f42aeba3e8c219141b2fc, vol_name:cephfs) < "" Oct 14 06:23:41 localhost nova_compute[295778]: 2025-10-14 10:23:41.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 122 KiB/s wr, 11 op/s Oct 14 06:23:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1,allow rw path=/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66/c7800ee6-8cf6-4dc0-8d0e-2ed45939ec7d", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7,allow rw pool=manila_data namespace=fsvolumens_958b7b65-e598-4225-a741-d122ebd39d66"]} : dispatch Oct 14 06:23:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1,allow rw path=/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66/c7800ee6-8cf6-4dc0-8d0e-2ed45939ec7d", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7,allow rw pool=manila_data namespace=fsvolumens_958b7b65-e598-4225-a741-d122ebd39d66"]}]': finished Oct 14 06:23:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:41 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' 
entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1747d2e3-48c9-40ca-8b56-bf66723b085d", "group_name": "541e5427-642f-4bda-a96d-096dd5de1d51", "format": "json"}]: dispatch Oct 14 06:23:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1747d2e3-48c9-40ca-8b56-bf66723b085d, format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1747d2e3-48c9-40ca-8b56-bf66723b085d, format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:41 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:41.707+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1747d2e3-48c9-40ca-8b56-bf66723b085d' of type subvolume Oct 14 06:23:41 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1747d2e3-48c9-40ca-8b56-bf66723b085d' of type subvolume Oct 14 06:23:41 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1747d2e3-48c9-40ca-8b56-bf66723b085d", "force": true, "group_name": "541e5427-642f-4bda-a96d-096dd5de1d51", "format": "json"}]: dispatch Oct 14 06:23:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, prefix:fs subvolume rm, sub_name:1747d2e3-48c9-40ca-8b56-bf66723b085d, vol_name:cephfs) < "" Oct 14 06:23:41 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/541e5427-642f-4bda-a96d-096dd5de1d51/1747d2e3-48c9-40ca-8b56-bf66723b085d'' moved to trashcan Oct 14 06:23:41 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:41 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, prefix:fs subvolume rm, sub_name:1747d2e3-48c9-40ca-8b56-bf66723b085d, vol_name:cephfs) < "" Oct 14 06:23:41 localhost nova_compute[295778]: 2025-10-14 10:23:41.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:23:42 localhost nova_compute[295778]: 2025-10-14 10:23:42.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:23:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 70 KiB/s wr, 6 op/s Oct 14 06:23:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "66d13c30-5830-46cb-8d8c-7ebcecc8e4d6", "format": "json"}]: dispatch Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:66d13c30-5830-46cb-8d8c-7ebcecc8e4d6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:66d13c30-5830-46cb-8d8c-7ebcecc8e4d6, format:json, prefix:fs clone status, 
vol_name:cephfs) < "" Oct 14 06:23:44 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:44.141+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66d13c30-5830-46cb-8d8c-7ebcecc8e4d6' of type subvolume Oct 14 06:23:44 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66d13c30-5830-46cb-8d8c-7ebcecc8e4d6' of type subvolume Oct 14 06:23:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "66d13c30-5830-46cb-8d8c-7ebcecc8e4d6", "force": true, "format": "json"}]: dispatch Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66d13c30-5830-46cb-8d8c-7ebcecc8e4d6, vol_name:cephfs) < "" Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/66d13c30-5830-46cb-8d8c-7ebcecc8e4d6'' moved to trashcan Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66d13c30-5830-46cb-8d8c-7ebcecc8e4d6, vol_name:cephfs) < "" Oct 14 06:23:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "958b7b65-e598-4225-a741-d122ebd39d66", "auth_id": "bob", "format": "json"}]: dispatch Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, 
prefix:fs subvolume deauthorize, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 14 06:23:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7"]} v 0) Oct 14 06:23:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7"]} : dispatch Oct 14 06:23:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:23:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. 
Oct 14 06:23:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7"]}]': finished Oct 14 06:23:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "958b7b65-e598-4225-a741-d122ebd39d66", "auth_id": "bob", "format": "json"}]: dispatch Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66/c7800ee6-8cf6-4dc0-8d0e-2ed45939ec7d Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:44 localhost systemd[1]: tmp-crun.S6fSaA.mount: 
Deactivated successfully. Oct 14 06:23:44 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:44 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7"]} : dispatch Oct 14 06:23:44 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_6e43484d-3647-4085-8cab-db9b4f4530f7"]}]': finished Oct 14 06:23:44 localhost podman[345529]: 2025-10-14 10:23:44.572237642 +0000 UTC m=+0.100551472 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 06:23:44 localhost podman[345530]: 2025-10-14 10:23:44.637541201 +0000 UTC m=+0.163973791 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:23:44 localhost podman[345530]: 2025-10-14 10:23:44.64699156 +0000 UTC m=+0.173424160 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:23:44 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:23:44 localhost podman[345529]: 2025-10-14 10:23:44.674150879 +0000 UTC m=+0.202464639 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:23:44 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:23:44 localhost podman[345528]: 2025-10-14 10:23:44.555335054 +0000 UTC m=+0.089966621 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, release=1755695350, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6) Oct 14 06:23:44 localhost nova_compute[295778]: 2025-10-14 10:23:44.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:44 localhost podman[345528]: 2025-10-14 10:23:44.763198355 +0000 UTC m=+0.297829912 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6) Oct 14 06:23:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "541e5427-642f-4bda-a96d-096dd5de1d51", "force": true, "format": "json"}]: dispatch Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:541e5427-642f-4bda-a96d-096dd5de1d51, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:44 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:23:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 119 KiB/s wr, 11 op/s Oct 14 06:23:46 localhost nova_compute[295778]: 2025-10-14 10:23:46.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a82b025-c015-4b0a-816f-f6ad9b43cf92", "format": "json"}]: dispatch Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a82b025-c015-4b0a-816f-f6ad9b43cf92, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a82b025-c015-4b0a-816f-f6ad9b43cf92, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:47 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a82b025-c015-4b0a-816f-f6ad9b43cf92' of type subvolume Oct 14 06:23:47 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:47.331+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a82b025-c015-4b0a-816f-f6ad9b43cf92' of type subvolume Oct 14 06:23:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"7a82b025-c015-4b0a-816f-f6ad9b43cf92", "force": true, "format": "json"}]: dispatch Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a82b025-c015-4b0a-816f-f6ad9b43cf92, vol_name:cephfs) < "" Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7a82b025-c015-4b0a-816f-f6ad9b43cf92'' moved to trashcan Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a82b025-c015-4b0a-816f-f6ad9b43cf92, vol_name:cephfs) < "" Oct 14 06:23:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 87 KiB/s wr, 8 op/s Oct 14 06:23:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "bob", "format": "json"}]: dispatch Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:47 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 14 06:23:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:47 localhost 
ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) Oct 14 06:23:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Oct 14 06:23:47 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "auth_id": "bob", "format": "json"}]: dispatch Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7/09344f24-aa16-4ae2-bf7b-5dc05ab244e1 Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 14 06:23:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:48 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : 
from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a4caf8a0-7c4c-49b5-8392-f0f2ca04f519", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:23:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:23:48 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 14 06:23:48 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Oct 14 06:23:48 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Oct 14 06:23:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 87 KiB/s wr, 8 op/s Oct 14 06:23:49 localhost nova_compute[295778]: 2025-10-14 10:23:49.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs 
clone status", "vol_name": "cephfs", "clone_name": "a71f1ac8-e632-4724-85bc-bac6b2948d1c", "format": "json"}]: dispatch Oct 14 06:23:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a71f1ac8-e632-4724-85bc-bac6b2948d1c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a71f1ac8-e632-4724-85bc-bac6b2948d1c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:50 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:50.647+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a71f1ac8-e632-4724-85bc-bac6b2948d1c' of type subvolume Oct 14 06:23:50 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a71f1ac8-e632-4724-85bc-bac6b2948d1c' of type subvolume Oct 14 06:23:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a71f1ac8-e632-4724-85bc-bac6b2948d1c", "force": true, "format": "json"}]: dispatch Oct 14 06:23:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a71f1ac8-e632-4724-85bc-bac6b2948d1c, vol_name:cephfs) < "" Oct 14 06:23:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a71f1ac8-e632-4724-85bc-bac6b2948d1c'' moved to trashcan Oct 14 06:23:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, 
prefix:fs subvolume rm, sub_name:a71f1ac8-e632-4724-85bc-bac6b2948d1c, vol_name:cephfs) < "" Oct 14 06:23:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "176bc390-faa3-4298-ba20-64030a305faa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "a4caf8a0-7c4c-49b5-8392-f0f2ca04f519", "format": "json"}]: dispatch Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:176bc390-faa3-4298-ba20-64030a305faa, vol_name:cephfs) < "" Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/a4caf8a0-7c4c-49b5-8392-f0f2ca04f519/176bc390-faa3-4298-ba20-64030a305faa/.meta.tmp' Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/a4caf8a0-7c4c-49b5-8392-f0f2ca04f519/176bc390-faa3-4298-ba20-64030a305faa/.meta.tmp' to config b'/volumes/a4caf8a0-7c4c-49b5-8392-f0f2ca04f519/176bc390-faa3-4298-ba20-64030a305faa/.meta' Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:176bc390-faa3-4298-ba20-64030a305faa, vol_name:cephfs) < "" Oct 14 06:23:51 localhost nova_compute[295778]: 2025-10-14 10:23:51.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "176bc390-faa3-4298-ba20-64030a305faa", "group_name": "a4caf8a0-7c4c-49b5-8392-f0f2ca04f519", "format": "json"}]: dispatch Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, prefix:fs subvolume getpath, sub_name:176bc390-faa3-4298-ba20-64030a305faa, vol_name:cephfs) < "" Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, prefix:fs subvolume getpath, sub_name:176bc390-faa3-4298-ba20-64030a305faa, vol_name:cephfs) < "" Oct 14 06:23:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:23:51 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:23:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 123 KiB/s wr, 11 op/s Oct 14 06:23:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "958b7b65-e598-4225-a741-d122ebd39d66", "format": "json"}]: dispatch Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:958b7b65-e598-4225-a741-d122ebd39d66, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:958b7b65-e598-4225-a741-d122ebd39d66, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:51 
localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:51.749+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '958b7b65-e598-4225-a741-d122ebd39d66' of type subvolume Oct 14 06:23:51 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '958b7b65-e598-4225-a741-d122ebd39d66' of type subvolume Oct 14 06:23:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "958b7b65-e598-4225-a741-d122ebd39d66", "force": true, "format": "json"}]: dispatch Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/958b7b65-e598-4225-a741-d122ebd39d66'' moved to trashcan Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:958b7b65-e598-4225-a741-d122ebd39d66, vol_name:cephfs) < "" Oct 14 06:23:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 85 KiB/s wr, 8 op/s Oct 14 06:23:54 localhost ovn_metadata_agent[161927]: 2025-10-14 10:23:54.133 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to 
row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:23:54 localhost nova_compute[295778]: 2025-10-14 10:23:54.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:54 localhost ovn_metadata_agent[161927]: 2025-10-14 10:23:54.135 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:23:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "176bc390-faa3-4298-ba20-64030a305faa", "group_name": "a4caf8a0-7c4c-49b5-8392-f0f2ca04f519", "format": "json"}]: dispatch Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:176bc390-faa3-4298-ba20-64030a305faa, format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:176bc390-faa3-4298-ba20-64030a305faa, format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:54 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:54.623+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on 
subvolume '176bc390-faa3-4298-ba20-64030a305faa' of type subvolume Oct 14 06:23:54 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '176bc390-faa3-4298-ba20-64030a305faa' of type subvolume Oct 14 06:23:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "176bc390-faa3-4298-ba20-64030a305faa", "force": true, "group_name": "a4caf8a0-7c4c-49b5-8392-f0f2ca04f519", "format": "json"}]: dispatch Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, prefix:fs subvolume rm, sub_name:176bc390-faa3-4298-ba20-64030a305faa, vol_name:cephfs) < "" Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/a4caf8a0-7c4c-49b5-8392-f0f2ca04f519/176bc390-faa3-4298-ba20-64030a305faa'' moved to trashcan Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, prefix:fs subvolume rm, sub_name:176bc390-faa3-4298-ba20-64030a305faa, vol_name:cephfs) < "" Oct 14 06:23:54 localhost nova_compute[295778]: 2025-10-14 10:23:54.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "format": "json"}]: dispatch Oct 14 06:23:54 localhost ceph-mgr[300442]: 
[volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6e43484d-3647-4085-8cab-db9b4f4530f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6e43484d-3647-4085-8cab-db9b4f4530f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:23:54 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:23:54.931+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6e43484d-3647-4085-8cab-db9b4f4530f7' of type subvolume Oct 14 06:23:54 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6e43484d-3647-4085-8cab-db9b4f4530f7' of type subvolume Oct 14 06:23:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6e43484d-3647-4085-8cab-db9b4f4530f7", "force": true, "format": "json"}]: dispatch Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6e43484d-3647-4085-8cab-db9b4f4530f7'' moved to trashcan Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:23:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e43484d-3647-4085-8cab-db9b4f4530f7, vol_name:cephfs) < "" Oct 14 06:23:55 localhost ovn_metadata_agent[161927]: 2025-10-14 
10:23:55.137 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:23:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:23:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 119 KiB/s wr, 11 op/s Oct 14 06:23:56 localhost nova_compute[295778]: 2025-10-14 10:23:56.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:23:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:23:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 70 KiB/s wr, 6 op/s Oct 14 06:23:57 localhost podman[345595]: 2025-10-14 10:23:57.578444437 +0000 UTC m=+0.115910118 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 14 06:23:57 localhost podman[345595]: 2025-10-14 10:23:57.594143823 +0000 UTC m=+0.131609544 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251009, io.buildah.version=1.41.3) Oct 14 06:23:57 localhost systemd[1]: 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:23:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:23:57.648 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:23:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:23:57.648 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:23:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:23:57.648 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:23:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a4caf8a0-7c4c-49b5-8392-f0f2ca04f519", "force": true, "format": "json"}]: dispatch Oct 14 06:23:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a4caf8a0-7c4c-49b5-8392-f0f2ca04f519, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:23:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 
GiB / 42 GiB avail; 767 B/s rd, 70 KiB/s wr, 6 op/s Oct 14 06:23:59 localhost nova_compute[295778]: 2025-10-14 10:23:59.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:00 localhost podman[246584]: time="2025-10-14T10:24:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:24:00 localhost podman[246584]: @ - - [14/Oct/2025:10:24:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:24:00 localhost podman[246584]: @ - - [14/Oct/2025:10:24:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18911 "" "Go-http-client/1.1" Oct 14 06:24:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "02c8108e-bb72-4773-b53f-98904ab69102", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:24:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:24:01 localhost nova_compute[295778]: 2025-10-14 10:24:01.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 
06:24:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 79 KiB/s wr, 8 op/s Oct 14 06:24:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f5a956d7-349a-49fb-9d79-28f77e864b81", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81/.meta.tmp' Oct 14 06:24:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81/.meta.tmp' to config b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81/.meta' Oct 14 06:24:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f5a956d7-349a-49fb-9d79-28f77e864b81", "format": "json"}]: dispatch Oct 14 06:24:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, 
vol_name:cephfs) < "" Oct 14 06:24:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:03 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:03 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:03 localhost openstack_network_exporter[248748]: ERROR 10:24:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:24:03 localhost openstack_network_exporter[248748]: ERROR 10:24:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:24:03 localhost openstack_network_exporter[248748]: ERROR 10:24:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:24:03 localhost openstack_network_exporter[248748]: ERROR 10:24:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:24:03 localhost openstack_network_exporter[248748]: Oct 14 06:24:03 localhost openstack_network_exporter[248748]: ERROR 10:24:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:24:03 localhost openstack_network_exporter[248748]: Oct 14 06:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:24:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 44 KiB/s wr, 4 op/s Oct 14 06:24:03 localhost podman[345615]: 2025-10-14 10:24:03.541258223 +0000 UTC m=+0.081435906 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009) Oct 14 06:24:03 localhost podman[345615]: 2025-10-14 10:24:03.577190263 +0000 UTC m=+0.117367966 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 14 06:24:03 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:24:03 localhost podman[345616]: 2025-10-14 10:24:03.593676559 +0000 UTC m=+0.131372528 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:24:03 localhost podman[345616]: 2025-10-14 10:24:03.601977559 +0000 UTC m=+0.139673558 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:24:03 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:24:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "184896e0-3b36-4756-b941-aa8535158ea2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "02c8108e-bb72-4773-b53f-98904ab69102", "format": "json"}]: dispatch Oct 14 06:24:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:184896e0-3b36-4756-b941-aa8535158ea2, vol_name:cephfs) < "" Oct 14 06:24:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/02c8108e-bb72-4773-b53f-98904ab69102/184896e0-3b36-4756-b941-aa8535158ea2/.meta.tmp' Oct 14 06:24:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/02c8108e-bb72-4773-b53f-98904ab69102/184896e0-3b36-4756-b941-aa8535158ea2/.meta.tmp' to config b'/volumes/02c8108e-bb72-4773-b53f-98904ab69102/184896e0-3b36-4756-b941-aa8535158ea2/.meta' Oct 14 06:24:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:184896e0-3b36-4756-b941-aa8535158ea2, vol_name:cephfs) < "" Oct 14 06:24:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "184896e0-3b36-4756-b941-aa8535158ea2", "group_name": "02c8108e-bb72-4773-b53f-98904ab69102", "format": "json"}]: dispatch Oct 14 06:24:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, prefix:fs subvolume getpath, sub_name:184896e0-3b36-4756-b941-aa8535158ea2, vol_name:cephfs) < "" Oct 14 06:24:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, prefix:fs subvolume getpath, sub_name:184896e0-3b36-4756-b941-aa8535158ea2, vol_name:cephfs) < "" Oct 14 06:24:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:04 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:04 localhost nova_compute[295778]: 2025-10-14 10:24:04.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 62 KiB/s wr, 6 op/s Oct 14 06:24:06 localhost nova_compute[295778]: 2025-10-14 10:24:06.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:06 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f5a956d7-349a-49fb-9d79-28f77e864b81", "snap_name": "e2f013d7-f0c7-41f9-991d-7460bef9aaf4", "format": "json"}]: dispatch Oct 14 06:24:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2f013d7-f0c7-41f9-991d-7460bef9aaf4, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:24:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2f013d7-f0c7-41f9-991d-7460bef9aaf4, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:06 localhost podman[345655]: 2025-10-14 10:24:06.562687323 +0000 UTC m=+0.087550148 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 14 06:24:06 localhost podman[345655]: 2025-10-14 10:24:06.576095818 +0000 UTC m=+0.100958633 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 14 06:24:06 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:24:06 localhost systemd[1]: tmp-crun.YtIIo5.mount: Deactivated successfully. 
Oct 14 06:24:06 localhost podman[345656]: 2025-10-14 10:24:06.670471286 +0000 UTC m=+0.190120783 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:24:06 localhost podman[345656]: 2025-10-14 10:24:06.710219468 +0000 UTC m=+0.229868965 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251009) Oct 14 06:24:06 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:24:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 28 KiB/s wr, 3 op/s Oct 14 06:24:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "184896e0-3b36-4756-b941-aa8535158ea2", "group_name": "02c8108e-bb72-4773-b53f-98904ab69102", "format": "json"}]: dispatch Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:184896e0-3b36-4756-b941-aa8535158ea2, format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:184896e0-3b36-4756-b941-aa8535158ea2, format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:07 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:07.691+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '184896e0-3b36-4756-b941-aa8535158ea2' of type subvolume Oct 14 06:24:07 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '184896e0-3b36-4756-b941-aa8535158ea2' of type subvolume Oct 14 06:24:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "184896e0-3b36-4756-b941-aa8535158ea2", "force": true, "group_name": "02c8108e-bb72-4773-b53f-98904ab69102", "format": "json"}]: dispatch Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, 
format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, prefix:fs subvolume rm, sub_name:184896e0-3b36-4756-b941-aa8535158ea2, vol_name:cephfs) < "" Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/02c8108e-bb72-4773-b53f-98904ab69102/184896e0-3b36-4756-b941-aa8535158ea2'' moved to trashcan Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, prefix:fs subvolume rm, sub_name:184896e0-3b36-4756-b941-aa8535158ea2, vol_name:cephfs) < "" Oct 14 06:24:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "15606002-e082-41d1-be35-8d4e0971df33", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33/.meta.tmp' Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33/.meta.tmp' to config b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33/.meta' Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs 
subvolume create, size:2147483648, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "15606002-e082-41d1-be35-8d4e0971df33", "format": "json"}]: dispatch Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:07 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:07 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:24:09 Oct 14 06:24:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:24:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:24:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['.mgr', 'vms', 'volumes', 'manila_data', 'backups', 'manila_metadata', 'images'] Oct 14 06:24:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:24:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 28 KiB/s wr, 3 op/s Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 
0.29672641637004465 quantized to 32 (current 32) Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32) Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:24:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0014801110587660525 of space, bias 4.0, pg target 1.178168402777778 quantized to 16 (current 16) Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:24:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:24:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f5a956d7-349a-49fb-9d79-28f77e864b81", "snap_name": "e2f013d7-f0c7-41f9-991d-7460bef9aaf4_5af70cf5-23ce-48f8-86ff-262047eea8f7", "force": true, "format": "json"}]: dispatch Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2f013d7-f0c7-41f9-991d-7460bef9aaf4_5af70cf5-23ce-48f8-86ff-262047eea8f7, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81/.meta.tmp' Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81/.meta.tmp' to config b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81/.meta' Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2f013d7-f0c7-41f9-991d-7460bef9aaf4_5af70cf5-23ce-48f8-86ff-262047eea8f7, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:09 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f5a956d7-349a-49fb-9d79-28f77e864b81", "snap_name": "e2f013d7-f0c7-41f9-991d-7460bef9aaf4", "force": true, "format": "json"}]: dispatch Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2f013d7-f0c7-41f9-991d-7460bef9aaf4, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 
06:24:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81/.meta.tmp' Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81/.meta.tmp' to config b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81/.meta' Oct 14 06:24:09 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2f013d7-f0c7-41f9-991d-7460bef9aaf4, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:09 localhost nova_compute[295778]: 2025-10-14 10:24:09.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "02c8108e-bb72-4773-b53f-98904ab69102", "force": true, "format": "json"}]: dispatch Oct 14 06:24:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:24:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:02c8108e-bb72-4773-b53f-98904ab69102, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 14 06:24:11 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "15606002-e082-41d1-be35-8d4e0971df33", "snap_name": "da618c51-1a34-46be-af28-5fe95b8aad54", "format": "json"}]: dispatch Oct 14 06:24:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da618c51-1a34-46be-af28-5fe95b8aad54, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da618c51-1a34-46be-af28-5fe95b8aad54, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:11 localhost nova_compute[295778]: 2025-10-14 10:24:11.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 54 KiB/s wr, 5 op/s Oct 14 06:24:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "128d3845-cf67-47da-981e-c473957a0afb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < "" Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb/.meta.tmp' Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb/.meta.tmp' to config b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb/.meta' Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < "" Oct 14 06:24:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "128d3845-cf67-47da-981e-c473957a0afb", "format": "json"}]: dispatch Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < "" Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < "" Oct 14 06:24:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:12 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e250 do_prune osdmap full prune enabled Oct 14 06:24:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e251 e251: 6 total, 6 up, 6 in Oct 14 06:24:12 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e251: 6 total, 6 up, 6 in Oct 14 
06:24:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f5a956d7-349a-49fb-9d79-28f77e864b81", "format": "json"}]: dispatch Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f5a956d7-349a-49fb-9d79-28f77e864b81, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f5a956d7-349a-49fb-9d79-28f77e864b81, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:12 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:12.917+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f5a956d7-349a-49fb-9d79-28f77e864b81' of type subvolume Oct 14 06:24:12 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f5a956d7-349a-49fb-9d79-28f77e864b81' of type subvolume Oct 14 06:24:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f5a956d7-349a-49fb-9d79-28f77e864b81", "force": true, "format": "json"}]: dispatch Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f5a956d7-349a-49fb-9d79-28f77e864b81'' moved to trashcan Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 
'cephfs' Oct 14 06:24:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f5a956d7-349a-49fb-9d79-28f77e864b81, vol_name:cephfs) < "" Oct 14 06:24:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 54 KiB/s wr, 4 op/s Oct 14 06:24:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "85310486-f3d9-459b-8ac7-bea6f4a8069a", "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:85310486-f3d9-459b-8ac7-bea6f4a8069a, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:85310486-f3d9-459b-8ac7-bea6f4a8069a, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 14 06:24:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "15606002-e082-41d1-be35-8d4e0971df33", "snap_name": "da618c51-1a34-46be-af28-5fe95b8aad54_5284ddee-0564-4ecc-bf30-fd83c3730497", "force": true, "format": "json"}]: dispatch Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da618c51-1a34-46be-af28-5fe95b8aad54_5284ddee-0564-4ecc-bf30-fd83c3730497, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33/.meta.tmp' Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33/.meta.tmp' to config b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33/.meta' Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da618c51-1a34-46be-af28-5fe95b8aad54_5284ddee-0564-4ecc-bf30-fd83c3730497, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "15606002-e082-41d1-be35-8d4e0971df33", "snap_name": "da618c51-1a34-46be-af28-5fe95b8aad54", "force": true, "format": "json"}]: dispatch Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da618c51-1a34-46be-af28-5fe95b8aad54, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33/.meta.tmp' Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33/.meta.tmp' to config b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33/.meta' Oct 14 06:24:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:da618c51-1a34-46be-af28-5fe95b8aad54, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < "" Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0. Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.817106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73 Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437454817165, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1170, "num_deletes": 251, "total_data_size": 864943, "memory_usage": 886216, "flush_reason": "Manual Compaction"} Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437454825471, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 845507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38736, "largest_seqno": 39905, "table_properties": {"data_size": 840126, "index_size": 2660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13930, "raw_average_key_size": 21, "raw_value_size": 828523, "raw_average_value_size": 1270, "num_data_blocks": 116, "num_entries": 652, "num_filter_entries": 652, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", 
"column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760437395, "oldest_key_time": 1760437395, "file_creation_time": 1760437454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}} Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 8402 microseconds, and 3694 cpu microseconds. Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.825516) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 845507 bytes OK Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.825538) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.828555) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.828580) EVENT_LOG_v1 {"time_micros": 1760437454828572, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.828605) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 859226, prev total WAL file size 859226, number of live WAL files 2. Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.829260) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. 
'7061786F73003133353535' seq:0, type:0; will stop at (end) Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(825KB)], [72(16MB)] Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437454829342, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 18112979, "oldest_snapshot_seqno": -1} Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 14295 keys, 16892372 bytes, temperature: kUnknown Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437454933516, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 16892372, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16811896, "index_size": 43639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35781, "raw_key_size": 385991, "raw_average_key_size": 27, "raw_value_size": 16570022, "raw_average_value_size": 1159, "num_data_blocks": 1604, "num_entries": 14295, "num_filter_entries": 14295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760437454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}} Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.933993) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 16892372 bytes Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.936256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.5 rd, 161.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 16.5 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(41.4) write-amplify(20.0) OK, records in: 14822, records dropped: 527 output_compression: NoCompression Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.936289) EVENT_LOG_v1 {"time_micros": 1760437454936276, "job": 44, "event": "compaction_finished", "compaction_time_micros": 104398, "compaction_time_cpu_micros": 53563, "output_level": 6, "num_output_files": 1, "total_output_size": 16892372, "num_input_records": 14822, "num_output_records": 14295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005486731/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437454936705, "job": 44, "event": "table_file_deletion", "file_number": 74} Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437454939801, "job": 44, "event": "table_file_deletion", "file_number": 72} Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.829113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.939874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.939880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.939882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.939885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:24:14 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:24:14.939888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:24:14 localhost nova_compute[295778]: 2025-10-14 10:24:14.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Oct 14 06:24:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:24:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:24:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:24:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 78 KiB/s wr, 6 op/s Oct 14 06:24:15 localhost podman[345692]: 2025-10-14 10:24:15.559238525 +0000 UTC m=+0.088637377 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 14 06:24:15 localhost podman[345694]: 2025-10-14 10:24:15.540497428 +0000 UTC m=+0.073930337 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:24:15 localhost podman[345692]: 2025-10-14 10:24:15.597610359 +0000 UTC m=+0.127009281 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, name=ubi9-minimal, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc.) 
Oct 14 06:24:15 localhost podman[345693]: 2025-10-14 10:24:15.608872857 +0000 UTC m=+0.142055069 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:24:15 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:24:15 localhost podman[345694]: 2025-10-14 10:24:15.622498228 +0000 UTC m=+0.155931127 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:24:15 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:24:15 localhost podman[345693]: 2025-10-14 10:24:15.651264059 +0000 UTC m=+0.184446301 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:24:15 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:24:15 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "128d3845-cf67-47da-981e-c473957a0afb", "snap_name": "d5510f0c-88a9-435a-bcf1-2ad193b59892", "format": "json"}]: dispatch
Oct 14 06:24:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d5510f0c-88a9-435a-bcf1-2ad193b59892, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < ""
Oct 14 06:24:15 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d5510f0c-88a9-435a-bcf1-2ad193b59892, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < ""
Oct 14 06:24:16 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f141a269-fdbc-4f9f-9e01-80dff73e344f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:24:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f141a269-fdbc-4f9f-9e01-80dff73e344f, vol_name:cephfs) < ""
Oct 14 06:24:16 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f141a269-fdbc-4f9f-9e01-80dff73e344f/.meta.tmp'
Oct 14 06:24:16 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f141a269-fdbc-4f9f-9e01-80dff73e344f/.meta.tmp' to config b'/volumes/_nogroup/f141a269-fdbc-4f9f-9e01-80dff73e344f/.meta'
Oct 14 06:24:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f141a269-fdbc-4f9f-9e01-80dff73e344f, vol_name:cephfs) < ""
Oct 14 06:24:16 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f141a269-fdbc-4f9f-9e01-80dff73e344f", "format": "json"}]: dispatch
Oct 14 06:24:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f141a269-fdbc-4f9f-9e01-80dff73e344f, vol_name:cephfs) < ""
Oct 14 06:24:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f141a269-fdbc-4f9f-9e01-80dff73e344f, vol_name:cephfs) < ""
Oct 14 06:24:16 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:24:16 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:24:16 localhost nova_compute[295778]: 2025-10-14 10:24:16.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:24:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 14 06:24:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e252 e252: 6 total, 6 up, 6 in
Oct 14 06:24:17 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e252: 6 total, 6 up, 6 in
Oct 14 06:24:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 98 KiB/s wr, 8 op/s
Oct 14 06:24:17 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "15606002-e082-41d1-be35-8d4e0971df33", "format": "json"}]: dispatch
Oct 14 06:24:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:15606002-e082-41d1-be35-8d4e0971df33, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:24:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:15606002-e082-41d1-be35-8d4e0971df33, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:24:17 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '15606002-e082-41d1-be35-8d4e0971df33' of type subvolume
Oct 14 06:24:17 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:17.992+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '15606002-e082-41d1-be35-8d4e0971df33' of type subvolume
Oct 14 06:24:17 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "15606002-e082-41d1-be35-8d4e0971df33", "force": true, "format": "json"}]: dispatch
Oct 14 06:24:17 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < ""
Oct 14 06:24:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/15606002-e082-41d1-be35-8d4e0971df33'' moved to trashcan
Oct 14 06:24:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:24:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:15606002-e082-41d1-be35-8d4e0971df33, vol_name:cephfs) < ""
Oct 14 06:24:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 57 KiB/s wr, 5 op/s
Oct 14 06:24:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f141a269-fdbc-4f9f-9e01-80dff73e344f", "format": "json"}]: dispatch
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f141a269-fdbc-4f9f-9e01-80dff73e344f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f141a269-fdbc-4f9f-9e01-80dff73e344f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:24:19 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:19.641+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f141a269-fdbc-4f9f-9e01-80dff73e344f' of type subvolume
Oct 14 06:24:19 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f141a269-fdbc-4f9f-9e01-80dff73e344f' of type subvolume
Oct 14 06:24:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f141a269-fdbc-4f9f-9e01-80dff73e344f", "force": true, "format": "json"}]: dispatch
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f141a269-fdbc-4f9f-9e01-80dff73e344f, vol_name:cephfs) < ""
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f141a269-fdbc-4f9f-9e01-80dff73e344f'' moved to trashcan
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f141a269-fdbc-4f9f-9e01-80dff73e344f, vol_name:cephfs) < ""
Oct 14 06:24:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "128d3845-cf67-47da-981e-c473957a0afb", "snap_name": "d5510f0c-88a9-435a-bcf1-2ad193b59892_95f21989-b040-4143-bde2-e199432ea841", "force": true, "format": "json"}]: dispatch
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d5510f0c-88a9-435a-bcf1-2ad193b59892_95f21989-b040-4143-bde2-e199432ea841, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < ""
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb/.meta.tmp'
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb/.meta.tmp' to config b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb/.meta'
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d5510f0c-88a9-435a-bcf1-2ad193b59892_95f21989-b040-4143-bde2-e199432ea841, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < ""
Oct 14 06:24:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "128d3845-cf67-47da-981e-c473957a0afb", "snap_name": "d5510f0c-88a9-435a-bcf1-2ad193b59892", "force": true, "format": "json"}]: dispatch
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d5510f0c-88a9-435a-bcf1-2ad193b59892, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < ""
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb/.meta.tmp'
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb/.meta.tmp' to config b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb/.meta'
Oct 14 06:24:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d5510f0c-88a9-435a-bcf1-2ad193b59892, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < ""
Oct 14 06:24:20 localhost nova_compute[295778]: 2025-10-14 10:24:20.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:24:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:24:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 14 06:24:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e253 e253: 6 total, 6 up, 6 in
Oct 14 06:24:20 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e253: 6 total, 6 up, 6 in
Oct 14 06:24:20 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "85310486-f3d9-459b-8ac7-bea6f4a8069a", "snap_name": "b63eb2b0-e6f2-48d8-b7be-bd73be7665c8", "force": true, "format": "json"}]: dispatch
Oct 14 06:24:20 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:85310486-f3d9-459b-8ac7-bea6f4a8069a, prefix:fs subvolumegroup snapshot rm, snap_name:b63eb2b0-e6f2-48d8-b7be-bd73be7665c8, vol_name:cephfs) < ""
Oct 14 06:24:20 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:85310486-f3d9-459b-8ac7-bea6f4a8069a, prefix:fs subvolumegroup snapshot rm, snap_name:b63eb2b0-e6f2-48d8-b7be-bd73be7665c8, vol_name:cephfs) < ""
Oct 14 06:24:21 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b9354218-d4d2-4f90-aa77-f3cbdae72414", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:24:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:21 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414/.meta.tmp'
Oct 14 06:24:21 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414/.meta.tmp' to config b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414/.meta'
Oct 14 06:24:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:21 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b9354218-d4d2-4f90-aa77-f3cbdae72414", "format": "json"}]: dispatch
Oct 14 06:24:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:21 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:21 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:24:21 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:24:21 localhost nova_compute[295778]: 2025-10-14 10:24:21.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:24:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 120 KiB/s wr, 10 op/s
Oct 14 06:24:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 14 06:24:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e254 e254: 6 total, 6 up, 6 in
Oct 14 06:24:22 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e254: 6 total, 6 up, 6 in
Oct 14 06:24:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "23258dac-df20-457d-903e-dcae3603356a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:24:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < ""
Oct 14 06:24:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp'
Oct 14 06:24:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp' to config b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta'
Oct 14 06:24:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < ""
Oct 14 06:24:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "23258dac-df20-457d-903e-dcae3603356a", "format": "json"}]: dispatch
Oct 14 06:24:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < ""
Oct 14 06:24:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < ""
Oct 14 06:24:22 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:24:22 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:24:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "128d3845-cf67-47da-981e-c473957a0afb", "format": "json"}]: dispatch
Oct 14 06:24:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:128d3845-cf67-47da-981e-c473957a0afb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:24:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:128d3845-cf67-47da-981e-c473957a0afb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:24:23 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:23.031+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '128d3845-cf67-47da-981e-c473957a0afb' of type subvolume
Oct 14 06:24:23 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '128d3845-cf67-47da-981e-c473957a0afb' of type subvolume
Oct 14 06:24:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "128d3845-cf67-47da-981e-c473957a0afb", "force": true, "format": "json"}]: dispatch
Oct 14 06:24:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < ""
Oct 14 06:24:23 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/128d3845-cf67-47da-981e-c473957a0afb'' moved to trashcan
Oct 14 06:24:23 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:24:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:128d3845-cf67-47da-981e-c473957a0afb, vol_name:cephfs) < ""
Oct 14 06:24:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "85310486-f3d9-459b-8ac7-bea6f4a8069a", "force": true, "format": "json"}]: dispatch
Oct 14 06:24:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:85310486-f3d9-459b-8ac7-bea6f4a8069a, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 14 06:24:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:85310486-f3d9-459b-8ac7-bea6f4a8069a, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 14 06:24:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 659 B/s rd, 80 KiB/s wr, 6 op/s
Oct 14 06:24:24 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b9354218-d4d2-4f90-aa77-f3cbdae72414", "snap_name": "c3ae19e7-2432-46f3-9a1f-47362b927069", "format": "json"}]: dispatch
Oct 14 06:24:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c3ae19e7-2432-46f3-9a1f-47362b927069, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:24 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c3ae19e7-2432-46f3-9a1f-47362b927069, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:25 localhost nova_compute[295778]: 2025-10-14 10:24:25.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:24:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:24:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 14 06:24:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e255 e255: 6 total, 6 up, 6 in
Oct 14 06:24:25 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e255: 6 total, 6 up, 6 in
Oct 14 06:24:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 162 KiB/s wr, 13 op/s
Oct 14 06:24:26 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "23258dac-df20-457d-903e-dcae3603356a", "snap_name": "ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35", "format": "json"}]: dispatch
Oct 14 06:24:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < ""
Oct 14 06:24:26 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < ""
Oct 14 06:24:26 localhost nova_compute[295778]: 2025-10-14 10:24:26.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:24:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 563 B/s rd, 65 KiB/s wr, 5 op/s
Oct 14 06:24:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.
Oct 14 06:24:28 localhost systemd[1]: tmp-crun.qtxbHG.mount: Deactivated successfully.
Oct 14 06:24:28 localhost podman[345758]: 2025-10-14 10:24:28.551772129 +0000 UTC m=+0.092264342 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 06:24:28 localhost podman[345758]: 2025-10-14 10:24:28.56617455 +0000 UTC m=+0.106666773 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 14 06:24:28 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully.
Oct 14 06:24:28 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b9354218-d4d2-4f90-aa77-f3cbdae72414", "snap_name": "c3ae19e7-2432-46f3-9a1f-47362b927069_f1a20323-f365-4980-b571-dd6902116e19", "force": true, "format": "json"}]: dispatch
Oct 14 06:24:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c3ae19e7-2432-46f3-9a1f-47362b927069_f1a20323-f365-4980-b571-dd6902116e19, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414/.meta.tmp'
Oct 14 06:24:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414/.meta.tmp' to config b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414/.meta'
Oct 14 06:24:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c3ae19e7-2432-46f3-9a1f-47362b927069_f1a20323-f365-4980-b571-dd6902116e19, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:28 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b9354218-d4d2-4f90-aa77-f3cbdae72414", "snap_name": "c3ae19e7-2432-46f3-9a1f-47362b927069", "force": true, "format": "json"}]: dispatch
Oct 14 06:24:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c3ae19e7-2432-46f3-9a1f-47362b927069, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414/.meta.tmp'
Oct 14 06:24:28 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414/.meta.tmp' to config b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414/.meta'
Oct 14 06:24:28 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c3ae19e7-2432-46f3-9a1f-47362b927069, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < ""
Oct 14 06:24:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 59 KiB/s wr, 4 op/s
Oct 14 06:24:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "23258dac-df20-457d-903e-dcae3603356a", "snap_name": "ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35", "target_sub_name": "a03b8760-4702-4abd-9b55-1ec500316cae", "format": "json"}]: dispatch
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35, sub_name:23258dac-df20-457d-903e-dcae3603356a, target_sub_name:a03b8760-4702-4abd-9b55-1ec500316cae, vol_name:cephfs) < ""
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta.tmp'
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta.tmp' to config b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta'
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 3b1e09ef-63f3-42f5-9be3-44cc8ce70906 for path b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae'
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp'
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp' to config b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta'
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35, sub_name:23258dac-df20-457d-903e-dcae3603356a, target_sub_name:a03b8760-4702-4abd-9b55-1ec500316cae, vol_name:cephfs) < ""
Oct 14 06:24:29 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a03b8760-4702-4abd-9b55-1ec500316cae", "format": "json"}]: dispatch
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.796+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.796+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.796+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.796+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.796+0000 7ff5dd780640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a03b8760-4702-4abd-9b55-1ec500316cae, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a03b8760-4702-4abd-9b55-1ec500316cae, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, a03b8760-4702-4abd-9b55-1ec500316cae)
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.823+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.823+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.823+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.823+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:29.823+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, a03b8760-4702-4abd-9b55-1ec500316cae) -- by 0 seconds
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta.tmp'
Oct 14 06:24:29 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta.tmp' to config b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta'
Oct 14 06:24:29 localhost nova_compute[295778]: 2025-10-14 10:24:29.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:24:29 localhost nova_compute[295778]: 2025-10-14 10:24:29.936 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:24:29 localhost nova_compute[295778]: 2025-10-14 10:24:29.936 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:24:29 localhost nova_compute[295778]: 2025-10-14 10:24:29.937 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:24:29 localhost nova_compute[295778]: 2025-10-14 10:24:29.937 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for 
np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:24:29 localhost nova_compute[295778]: 2025-10-14 10:24:29.938 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:24:30 localhost nova_compute[295778]: 2025-10-14 10:24:30.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:24:30 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3247299255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:24:30 localhost nova_compute[295778]: 2025-10-14 10:24:30.388 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:24:30 localhost nova_compute[295778]: 2025-10-14 10:24:30.581 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:24:30 localhost nova_compute[295778]: 2025-10-14 10:24:30.582 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11330MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": 
"1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:24:30 localhost nova_compute[295778]: 2025-10-14 10:24:30.583 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:24:30 localhost nova_compute[295778]: 2025-10-14 10:24:30.583 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:24:30 localhost podman[246584]: time="2025-10-14T10:24:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:24:30 localhost podman[246584]: @ - - [14/Oct/2025:10:24:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:24:30 localhost podman[246584]: @ - - [14/Oct/2025:10:24:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18910 "" "Go-http-client/1.1" Oct 14 06:24:30 localhost nova_compute[295778]: 2025-10-14 10:24:30.675 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:24:30 localhost nova_compute[295778]: 2025-10-14 10:24:30.676 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:24:30 localhost nova_compute[295778]: 2025-10-14 10:24:30.698 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:24:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:24:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3246629718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:24:31 localhost nova_compute[295778]: 2025-10-14 10:24:31.142 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:24:31 localhost nova_compute[295778]: 2025-10-14 10:24:31.148 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:24:31 localhost nova_compute[295778]: 2025-10-14 10:24:31.170 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:24:31 localhost nova_compute[295778]: 2025-10-14 10:24:31.173 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:24:31 localhost nova_compute[295778]: 2025-10-14 10:24:31.173 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:24:31 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a03b8760-4702-4abd-9b55-1ec500316cae", "format": "json"}]: dispatch Oct 14 06:24:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a03b8760-4702-4abd-9b55-1ec500316cae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 900 B/s rd, 95 KiB/s wr, 9 op/s Oct 14 06:24:31 localhost nova_compute[295778]: 2025-10-14 10:24:31.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a03b8760-4702-4abd-9b55-1ec500316cae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:31 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.snap/ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35/bb07da4e-16db-49bf-b6f6-69cdac7ca3b3' to b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/a49ae401-72a3-4dc9-9a99-7826e309c6a6' Oct 14 06:24:31 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta.tmp' Oct 14 06:24:31 localhost ceph-mgr[300442]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta.tmp' to config b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta' Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.clone_index] untracking 3b1e09ef-63f3-42f5-9be3-44cc8ce70906 Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp' Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp' to config b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta' Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta.tmp' Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta.tmp' to config b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae/.meta' Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, a03b8760-4702-4abd-9b55-1ec500316cae) Oct 14 06:24:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9354218-d4d2-4f90-aa77-f3cbdae72414", "format": "json"}]: dispatch Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:32 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:32.214+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9354218-d4d2-4f90-aa77-f3cbdae72414' of type subvolume Oct 14 06:24:32 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9354218-d4d2-4f90-aa77-f3cbdae72414' of type subvolume Oct 14 06:24:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b9354218-d4d2-4f90-aa77-f3cbdae72414", "force": true, "format": "json"}]: dispatch Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < "" Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b9354218-d4d2-4f90-aa77-f3cbdae72414'' moved to trashcan Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:24:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9354218-d4d2-4f90-aa77-f3cbdae72414, vol_name:cephfs) < "" Oct 14 06:24:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e255 do_prune osdmap full prune enabled Oct 14 06:24:32 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e45: np0005486731.swasqz(active, since 16m), standbys: np0005486732.pasqzz, np0005486733.primvu Oct 14 06:24:32 localhost 
ceph-mon[307093]: mon.np0005486731@0(leader).osd e256 e256: 6 total, 6 up, 6 in Oct 14 06:24:32 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e256: 6 total, 6 up, 6 in Oct 14 06:24:33 localhost openstack_network_exporter[248748]: ERROR 10:24:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:24:33 localhost openstack_network_exporter[248748]: ERROR 10:24:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:24:33 localhost openstack_network_exporter[248748]: ERROR 10:24:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:24:33 localhost openstack_network_exporter[248748]: ERROR 10:24:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:24:33 localhost openstack_network_exporter[248748]: Oct 14 06:24:33 localhost openstack_network_exporter[248748]: ERROR 10:24:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:24:33 localhost openstack_network_exporter[248748]: Oct 14 06:24:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 495 B/s rd, 47 KiB/s wr, 5 op/s Oct 14 06:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:24:34 localhost systemd[1]: tmp-crun.JJ08Yp.mount: Deactivated successfully. 
Oct 14 06:24:34 localhost podman[345846]: 2025-10-14 10:24:34.547392801 +0000 UTC m=+0.087886898 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:24:34 localhost podman[345845]: 2025-10-14 10:24:34.592300099 +0000 UTC m=+0.136749250 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent) Oct 14 06:24:34 localhost podman[345845]: 2025-10-14 10:24:34.597132787 +0000 UTC m=+0.141581978 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 14 06:24:34 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:24:34 localhost podman[345846]: 2025-10-14 10:24:34.610574902 +0000 UTC m=+0.151069009 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:24:34 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:24:35 localhost nova_compute[295778]: 2025-10-14 10:24:35.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:35 localhost nova_compute[295778]: 2025-10-14 10:24:35.174 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:24:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e256 do_prune osdmap full prune enabled Oct 14 06:24:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e257 e257: 6 total, 6 up, 6 in Oct 14 06:24:35 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e257: 6 total, 6 up, 6 in Oct 14 06:24:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 92 KiB/s wr, 10 op/s Oct 14 06:24:35 localhost nova_compute[295778]: 2025-10-14 10:24:35.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:24:35 localhost nova_compute[295778]: 2025-10-14 10:24:35.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:24:35 localhost nova_compute[295778]: 2025-10-14 10:24:35.905 2 DEBUG nova.compute.manager [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:24:35 localhost nova_compute[295778]: 2025-10-14 10:24:35.919 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:24:36 localhost nova_compute[295778]: 2025-10-14 10:24:36.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "81ec204c-5a9b-4914-8e15-0dd2a0b60088", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:24:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088/.meta.tmp' Oct 14 06:24:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088/.meta.tmp' to config b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088/.meta' Oct 14 06:24:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:24:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "81ec204c-5a9b-4914-8e15-0dd2a0b60088", "format": "json"}]: dispatch Oct 14 06:24:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:24:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:24:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:37 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:24:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 92 KiB/s wr, 10 op/s Oct 14 06:24:37 localhost podman[345888]: 2025-10-14 10:24:37.543983595 +0000 UTC m=+0.088525703 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid) Oct 14 06:24:37 localhost podman[345888]: 
2025-10-14 10:24:37.556704812 +0000 UTC m=+0.101246920 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 14 06:24:37 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:24:37 localhost podman[345889]: 2025-10-14 10:24:37.641122455 +0000 UTC m=+0.180746694 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 14 06:24:37 localhost podman[345889]: 2025-10-14 10:24:37.681075943 +0000 UTC m=+0.220700131 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:24:37 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:24:38 localhost nova_compute[295778]: 2025-10-14 10:24:38.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:24:38 localhost nova_compute[295778]: 2025-10-14 10:24:38.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:24:38 localhost nova_compute[295778]: 2025-10-14 10:24:38.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:24:38 localhost nova_compute[295778]: 2025-10-14 10:24:38.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:24:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:24:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:24:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:24:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:24:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:24:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:24:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 43 KiB/s wr, 5 op/s Oct 14 06:24:39 localhost podman[346069]: Oct 14 06:24:39 localhost podman[346069]: 2025-10-14 10:24:39.571882502 +0000 UTC m=+0.083109080 container create 1b70fcc7c5992b86a3e303329dbb4dc03ee0f10acb17398c1f6c6bb326042c28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_grothendieck, ceph=True, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7) Oct 14 06:24:39 localhost systemd[1]: Started libpod-conmon-1b70fcc7c5992b86a3e303329dbb4dc03ee0f10acb17398c1f6c6bb326042c28.scope. Oct 14 06:24:39 localhost podman[346069]: 2025-10-14 10:24:39.535095398 +0000 UTC m=+0.046322036 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:24:39 localhost systemd[1]: Started libcrun container. 
Oct 14 06:24:39 localhost podman[346069]: 2025-10-14 10:24:39.654476758 +0000 UTC m=+0.165703336 container init 1b70fcc7c5992b86a3e303329dbb4dc03ee0f10acb17398c1f6c6bb326042c28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_grothendieck, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, version=7, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., release=553, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64) Oct 14 06:24:39 localhost podman[346069]: 2025-10-14 10:24:39.669362312 +0000 UTC m=+0.180588880 container start 1b70fcc7c5992b86a3e303329dbb4dc03ee0f10acb17398c1f6c6bb326042c28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_grothendieck, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, release=553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.expose-services=, RELEASE=main, 
CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 06:24:39 localhost systemd[1]: tmp-crun.r5B077.mount: Deactivated successfully. Oct 14 06:24:39 localhost podman[346069]: 2025-10-14 10:24:39.6696682 +0000 UTC m=+0.180894778 container attach 1b70fcc7c5992b86a3e303329dbb4dc03ee0f10acb17398c1f6c6bb326042c28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_grothendieck, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, version=7, ceph=True, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container) Oct 14 06:24:39 localhost vigorous_grothendieck[346084]: 167 167 Oct 
14 06:24:39 localhost systemd[1]: libpod-1b70fcc7c5992b86a3e303329dbb4dc03ee0f10acb17398c1f6c6bb326042c28.scope: Deactivated successfully. Oct 14 06:24:39 localhost podman[346069]: 2025-10-14 10:24:39.678235477 +0000 UTC m=+0.189462055 container died 1b70fcc7c5992b86a3e303329dbb4dc03ee0f10acb17398c1f6c6bb326042c28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_grothendieck, ceph=True, architecture=x86_64, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., release=553, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 14 06:24:39 localhost podman[346089]: 2025-10-14 10:24:39.788510476 +0000 UTC m=+0.097312717 container remove 1b70fcc7c5992b86a3e303329dbb4dc03ee0f10acb17398c1f6c6bb326042c28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_grothendieck, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_BRANCH=main, ceph=True, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, description=Red Hat Ceph Storage 7) Oct 14 06:24:39 localhost systemd[1]: libpod-conmon-1b70fcc7c5992b86a3e303329dbb4dc03ee0f10acb17398c1f6c6bb326042c28.scope: Deactivated successfully. Oct 14 06:24:40 localhost podman[346111]: Oct 14 06:24:40 localhost podman[346111]: 2025-10-14 10:24:40.024453069 +0000 UTC m=+0.083889971 container create a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_lehmann, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.buildah.version=1.33.12, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=) Oct 14 06:24:40 localhost systemd[1]: Started libpod-conmon-a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51.scope. Oct 14 06:24:40 localhost systemd[1]: Started libcrun container. Oct 14 06:24:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dda431f463c752ccca1c7ecee31e1a7492f60f68a53ce50a46e3d6fb5a14e/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 14 06:24:40 localhost podman[346111]: 2025-10-14 10:24:39.986816573 +0000 UTC m=+0.046278846 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 14 06:24:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dda431f463c752ccca1c7ecee31e1a7492f60f68a53ce50a46e3d6fb5a14e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 14 06:24:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dda431f463c752ccca1c7ecee31e1a7492f60f68a53ce50a46e3d6fb5a14e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 14 06:24:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605dda431f463c752ccca1c7ecee31e1a7492f60f68a53ce50a46e3d6fb5a14e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 14 06:24:40 localhost podman[346111]: 2025-10-14 10:24:40.095174371 +0000 UTC m=+0.154611283 container init a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_lehmann, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, architecture=x86_64, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, release=553, version=7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=) Oct 14 06:24:40 localhost nova_compute[295778]: 2025-10-14 10:24:40.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:40 localhost podman[346111]: 2025-10-14 10:24:40.142026261 +0000 UTC m=+0.201463163 container start a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_lehmann, GIT_BRANCH=main, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=) Oct 14 06:24:40 localhost podman[346111]: 2025-10-14 10:24:40.142505884 +0000 UTC m=+0.201942786 container attach a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_lehmann, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, architecture=x86_64, build-date=2025-09-24T08:57:55, name=rhceph, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, CEPH_POINT_RELEASE=, release=553, version=7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:24:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "73b6bb1a-cfde-4ac8-b211-3387f5434292", "size": 1073741824, "namespace_isolated": 
true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:73b6bb1a-cfde-4ac8-b211-3387f5434292, vol_name:cephfs) < "" Oct 14 06:24:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/73b6bb1a-cfde-4ac8-b211-3387f5434292/.meta.tmp' Oct 14 06:24:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/73b6bb1a-cfde-4ac8-b211-3387f5434292/.meta.tmp' to config b'/volumes/_nogroup/73b6bb1a-cfde-4ac8-b211-3387f5434292/.meta' Oct 14 06:24:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:73b6bb1a-cfde-4ac8-b211-3387f5434292, vol_name:cephfs) < "" Oct 14 06:24:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "73b6bb1a-cfde-4ac8-b211-3387f5434292", "format": "json"}]: dispatch Oct 14 06:24:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73b6bb1a-cfde-4ac8-b211-3387f5434292, vol_name:cephfs) < "" Oct 14 06:24:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73b6bb1a-cfde-4ac8-b211-3387f5434292, vol_name:cephfs) < "" Oct 14 06:24:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:40 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : 
from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "81ec204c-5a9b-4914-8e15-0dd2a0b60088", "snap_name": "c2b1dd95-2114-4ea2-aeeb-956be33dcefe", "format": "json"}]: dispatch Oct 14 06:24:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c2b1dd95-2114-4ea2-aeeb-956be33dcefe, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:24:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c2b1dd95-2114-4ea2-aeeb-956be33dcefe, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:24:40 localhost systemd[1]: var-lib-containers-storage-overlay-f5e479ee103db128cfd0ca95c725a97f9f0b0e7c016d022986422ea8d6371948-merged.mount: Deactivated successfully. 
Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost funny_lehmann[346126]: [ Oct 14 06:24:41 localhost funny_lehmann[346126]: { Oct 14 06:24:41 localhost funny_lehmann[346126]: "available": false, Oct 14 06:24:41 localhost funny_lehmann[346126]: "ceph_device": false, Oct 14 06:24:41 localhost funny_lehmann[346126]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 14 06:24:41 localhost funny_lehmann[346126]: "lsm_data": {}, Oct 14 06:24:41 localhost funny_lehmann[346126]: "lvs": [], Oct 14 06:24:41 localhost funny_lehmann[346126]: "path": "/dev/sr0", Oct 14 06:24:41 localhost funny_lehmann[346126]: "rejected_reasons": [ Oct 14 06:24:41 localhost funny_lehmann[346126]: "Insufficient 
space (<5GB)", Oct 14 06:24:41 localhost funny_lehmann[346126]: "Has a FileSystem" Oct 14 06:24:41 localhost funny_lehmann[346126]: ], Oct 14 06:24:41 localhost funny_lehmann[346126]: "sys_api": { Oct 14 06:24:41 localhost funny_lehmann[346126]: "actuators": null, Oct 14 06:24:41 localhost funny_lehmann[346126]: "device_nodes": "sr0", Oct 14 06:24:41 localhost funny_lehmann[346126]: "human_readable_size": "482.00 KB", Oct 14 06:24:41 localhost funny_lehmann[346126]: "id_bus": "ata", Oct 14 06:24:41 localhost funny_lehmann[346126]: "model": "QEMU DVD-ROM", Oct 14 06:24:41 localhost funny_lehmann[346126]: "nr_requests": "2", Oct 14 06:24:41 localhost funny_lehmann[346126]: "partitions": {}, Oct 14 06:24:41 localhost funny_lehmann[346126]: "path": "/dev/sr0", Oct 14 06:24:41 localhost funny_lehmann[346126]: "removable": "1", Oct 14 06:24:41 localhost funny_lehmann[346126]: "rev": "2.5+", Oct 14 06:24:41 localhost funny_lehmann[346126]: "ro": "0", Oct 14 06:24:41 localhost funny_lehmann[346126]: "rotational": "1", Oct 14 06:24:41 localhost funny_lehmann[346126]: "sas_address": "", Oct 14 06:24:41 localhost funny_lehmann[346126]: "sas_device_handle": "", Oct 14 06:24:41 localhost funny_lehmann[346126]: "scheduler_mode": "mq-deadline", Oct 14 06:24:41 localhost funny_lehmann[346126]: "sectors": 0, Oct 14 06:24:41 localhost funny_lehmann[346126]: "sectorsize": "2048", Oct 14 06:24:41 localhost funny_lehmann[346126]: "size": 493568.0, Oct 14 06:24:41 localhost funny_lehmann[346126]: "support_discard": "0", Oct 14 06:24:41 localhost funny_lehmann[346126]: "type": "disk", Oct 14 06:24:41 localhost funny_lehmann[346126]: "vendor": "QEMU" Oct 14 06:24:41 localhost funny_lehmann[346126]: } Oct 14 06:24:41 localhost funny_lehmann[346126]: } Oct 14 06:24:41 localhost funny_lehmann[346126]: ] Oct 14 06:24:41 localhost systemd[1]: libpod-a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51.scope: Deactivated successfully. 
Oct 14 06:24:41 localhost systemd[1]: libpod-a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51.scope: Consumed 1.128s CPU time. Oct 14 06:24:41 localhost podman[346111]: 2025-10-14 10:24:41.266809728 +0000 UTC m=+1.326246670 container died a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_lehmann, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, release=553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public) Oct 14 06:24:41 localhost systemd[1]: var-lib-containers-storage-overlay-605dda431f463c752ccca1c7ecee31e1a7492f60f68a53ce50a46e3d6fb5a14e-merged.mount: Deactivated successfully. 
Oct 14 06:24:41 localhost podman[348199]: 2025-10-14 10:24:41.371001146 +0000 UTC m=+0.091945335 container remove a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_lehmann, architecture=x86_64, io.buildah.version=1.33.12, name=rhceph, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., RELEASE=main, release=553, CEPH_POINT_RELEASE=, ceph=True, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Oct 14 06:24:41 localhost systemd[1]: libpod-conmon-a458e3dd89d955088ee9f200e25957fa4ad11cab268b39107f1a551f22ac1e51.scope: Deactivated successfully. 
Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 4052cd34-1cd4-46b3-a9e5-b913ccd2ff8c (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:24:41 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 
4052cd34-1cd4-46b3-a9e5-b913ccd2ff8c (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:24:41 localhost ceph-mgr[300442]: [progress INFO root] Completed event 4052cd34-1cd4-46b3-a9e5-b913ccd2ff8c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:24:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:24:41 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:24:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 924 B/s rd, 55 KiB/s wr, 6 op/s Oct 14 06:24:41 localhost nova_compute[295778]: 2025-10-14 10:24:41.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:24:41 localhost 
ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:41 localhost nova_compute[295778]: 2025-10-14 10:24:41.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:24:41 localhost nova_compute[295778]: 2025-10-14 10:24:41.925 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:24:42 localhost nova_compute[295778]: 2025-10-14 10:24:42.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:24:42 localhost nova_compute[295778]: 2025-10-14 10:24:42.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:24:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 49 KiB/s wr, 5 op/s Oct 14 06:24:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c88380a9-7a0f-47e5-b976-0829cdb6533c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c88380a9-7a0f-47e5-b976-0829cdb6533c, vol_name:cephfs) < "" Oct 14 06:24:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:24:43.854 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:24:43 localhost nova_compute[295778]: 2025-10-14 10:24:43.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:43 localhost ovn_metadata_agent[161927]: 2025-10-14 10:24:43.856 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:24:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c88380a9-7a0f-47e5-b976-0829cdb6533c/.meta.tmp' Oct 14 06:24:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c88380a9-7a0f-47e5-b976-0829cdb6533c/.meta.tmp' to config b'/volumes/_nogroup/c88380a9-7a0f-47e5-b976-0829cdb6533c/.meta' Oct 14 06:24:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, 
size:1073741824, sub_name:c88380a9-7a0f-47e5-b976-0829cdb6533c, vol_name:cephfs) < "" Oct 14 06:24:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c88380a9-7a0f-47e5-b976-0829cdb6533c", "format": "json"}]: dispatch Oct 14 06:24:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c88380a9-7a0f-47e5-b976-0829cdb6533c, vol_name:cephfs) < "" Oct 14 06:24:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c88380a9-7a0f-47e5-b976-0829cdb6533c, vol_name:cephfs) < "" Oct 14 06:24:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:43 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:44 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:24:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:24:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf143353-3a8e-489e-9d65-b4c5597b2337", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf143353-3a8e-489e-9d65-b4c5597b2337, vol_name:cephfs) < "" Oct 14 06:24:44 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:24:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf143353-3a8e-489e-9d65-b4c5597b2337/.meta.tmp' Oct 14 06:24:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf143353-3a8e-489e-9d65-b4c5597b2337/.meta.tmp' to config b'/volumes/_nogroup/bf143353-3a8e-489e-9d65-b4c5597b2337/.meta' Oct 14 06:24:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf143353-3a8e-489e-9d65-b4c5597b2337, vol_name:cephfs) < "" Oct 14 06:24:44 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf143353-3a8e-489e-9d65-b4c5597b2337", "format": "json"}]: dispatch Oct 14 06:24:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf143353-3a8e-489e-9d65-b4c5597b2337, vol_name:cephfs) < "" Oct 14 06:24:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf143353-3a8e-489e-9d65-b4c5597b2337, vol_name:cephfs) < "" Oct 14 06:24:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:44 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 
172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:45 localhost nova_compute[295778]: 2025-10-14 10:24:45.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e257 do_prune osdmap full prune enabled Oct 14 06:24:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 e258: 6 total, 6 up, 6 in Oct 14 06:24:45 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e258: 6 total, 6 up, 6 in Oct 14 06:24:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 64 KiB/s wr, 4 op/s Oct 14 06:24:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:24:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:24:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:24:46 localhost nova_compute[295778]: 2025-10-14 10:24:46.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:46 localhost systemd[1]: tmp-crun.RIuRDN.mount: Deactivated successfully. 
Oct 14 06:24:46 localhost podman[348232]: 2025-10-14 10:24:46.6113842 +0000 UTC m=+0.147586116 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal) Oct 14 06:24:46 localhost podman[348234]: 2025-10-14 10:24:46.541765897 +0000 UTC m=+0.073573777 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:24:46 localhost podman[348232]: 2025-10-14 10:24:46.658166358 +0000 UTC m=+0.194368314 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container) Oct 14 06:24:46 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:24:46 localhost podman[348234]: 2025-10-14 10:24:46.673148145 +0000 UTC m=+0.204956085 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:24:46 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:24:46 localhost podman[348233]: 2025-10-14 10:24:46.627830776 +0000 UTC m=+0.162283036 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:24:46 localhost podman[348233]: 2025-10-14 10:24:46.757590099 +0000 UTC m=+0.292042339 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, 
container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 06:24:46 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:24:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c88380a9-7a0f-47e5-b976-0829cdb6533c", "format": "json"}]: dispatch Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c88380a9-7a0f-47e5-b976-0829cdb6533c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c88380a9-7a0f-47e5-b976-0829cdb6533c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:47 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:47.303+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c88380a9-7a0f-47e5-b976-0829cdb6533c' of type subvolume Oct 14 06:24:47 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c88380a9-7a0f-47e5-b976-0829cdb6533c' of type subvolume Oct 14 06:24:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c88380a9-7a0f-47e5-b976-0829cdb6533c", "force": true, "format": "json"}]: dispatch Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c88380a9-7a0f-47e5-b976-0829cdb6533c, vol_name:cephfs) < "" Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c88380a9-7a0f-47e5-b976-0829cdb6533c'' moved to trashcan Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c88380a9-7a0f-47e5-b976-0829cdb6533c, vol_name:cephfs) < "" Oct 14 06:24:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 64 KiB/s wr, 4 op/s Oct 14 06:24:47 localhost systemd[1]: tmp-crun.uYGSjy.mount: Deactivated successfully. Oct 14 06:24:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf143353-3a8e-489e-9d65-b4c5597b2337", "format": "json"}]: dispatch Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bf143353-3a8e-489e-9d65-b4c5597b2337, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bf143353-3a8e-489e-9d65-b4c5597b2337, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:47 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:47.905+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf143353-3a8e-489e-9d65-b4c5597b2337' of type subvolume Oct 14 06:24:47 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf143353-3a8e-489e-9d65-b4c5597b2337' of type subvolume Oct 14 06:24:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bf143353-3a8e-489e-9d65-b4c5597b2337", "force": true, "format": "json"}]: dispatch Oct 14 
06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf143353-3a8e-489e-9d65-b4c5597b2337, vol_name:cephfs) < "" Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bf143353-3a8e-489e-9d65-b4c5597b2337'' moved to trashcan Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:24:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf143353-3a8e-489e-9d65-b4c5597b2337, vol_name:cephfs) < "" Oct 14 06:24:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 64 KiB/s wr, 4 op/s Oct 14 06:24:49 localhost ovn_metadata_agent[161927]: 2025-10-14 10:24:49.858 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.975 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 
10:24:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:24:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:24:50 localhost nova_compute[295778]: 2025-10-14 10:24:50.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:50 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "73b6bb1a-cfde-4ac8-b211-3387f5434292", "format": "json"}]: dispatch Oct 14 06:24:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:73b6bb1a-cfde-4ac8-b211-3387f5434292, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:73b6bb1a-cfde-4ac8-b211-3387f5434292, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:50 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:50.533+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73b6bb1a-cfde-4ac8-b211-3387f5434292' of type subvolume Oct 14 06:24:50 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73b6bb1a-cfde-4ac8-b211-3387f5434292' of type subvolume Oct 14 06:24:50 localhost ceph-mgr[300442]: 
log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "73b6bb1a-cfde-4ac8-b211-3387f5434292", "force": true, "format": "json"}]: dispatch Oct 14 06:24:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73b6bb1a-cfde-4ac8-b211-3387f5434292, vol_name:cephfs) < "" Oct 14 06:24:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/73b6bb1a-cfde-4ac8-b211-3387f5434292'' moved to trashcan Oct 14 06:24:50 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:24:50 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73b6bb1a-cfde-4ac8-b211-3387f5434292, vol_name:cephfs) < "" Oct 14 06:24:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "159dc3c6-4e97-45aa-b569-928af482a1f1", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:159dc3c6-4e97-45aa-b569-928af482a1f1, vol_name:cephfs) < "" Oct 14 06:24:51 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/159dc3c6-4e97-45aa-b569-928af482a1f1/.meta.tmp' Oct 14 06:24:51 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/159dc3c6-4e97-45aa-b569-928af482a1f1/.meta.tmp' to config 
b'/volumes/_nogroup/159dc3c6-4e97-45aa-b569-928af482a1f1/.meta' Oct 14 06:24:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:159dc3c6-4e97-45aa-b569-928af482a1f1, vol_name:cephfs) < "" Oct 14 06:24:51 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "159dc3c6-4e97-45aa-b569-928af482a1f1", "format": "json"}]: dispatch Oct 14 06:24:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:159dc3c6-4e97-45aa-b569-928af482a1f1, vol_name:cephfs) < "" Oct 14 06:24:51 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:159dc3c6-4e97-45aa-b569-928af482a1f1, vol_name:cephfs) < "" Oct 14 06:24:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:51 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 79 KiB/s wr, 4 op/s Oct 14 06:24:51 localhost nova_compute[295778]: 2025-10-14 10:24:51.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 79 KiB/s wr, 4 op/s 
Oct 14 06:24:53 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9b17e37f-7694-46e4-a5af-a040c6854552", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9b17e37f-7694-46e4-a5af-a040c6854552, vol_name:cephfs) < "" Oct 14 06:24:53 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9b17e37f-7694-46e4-a5af-a040c6854552/.meta.tmp' Oct 14 06:24:53 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9b17e37f-7694-46e4-a5af-a040c6854552/.meta.tmp' to config b'/volumes/_nogroup/9b17e37f-7694-46e4-a5af-a040c6854552/.meta' Oct 14 06:24:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9b17e37f-7694-46e4-a5af-a040c6854552, vol_name:cephfs) < "" Oct 14 06:24:53 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9b17e37f-7694-46e4-a5af-a040c6854552", "format": "json"}]: dispatch Oct 14 06:24:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9b17e37f-7694-46e4-a5af-a040c6854552, vol_name:cephfs) < "" Oct 14 06:24:53 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:9b17e37f-7694-46e4-a5af-a040c6854552, vol_name:cephfs) < "" Oct 14 06:24:53 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:53 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "159dc3c6-4e97-45aa-b569-928af482a1f1", "format": "json"}]: dispatch Oct 14 06:24:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:159dc3c6-4e97-45aa-b569-928af482a1f1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:159dc3c6-4e97-45aa-b569-928af482a1f1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:54 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:54.583+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '159dc3c6-4e97-45aa-b569-928af482a1f1' of type subvolume Oct 14 06:24:54 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '159dc3c6-4e97-45aa-b569-928af482a1f1' of type subvolume Oct 14 06:24:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "159dc3c6-4e97-45aa-b569-928af482a1f1", "force": true, "format": "json"}]: dispatch Oct 14 06:24:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:159dc3c6-4e97-45aa-b569-928af482a1f1, vol_name:cephfs) < "" Oct 14 06:24:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/159dc3c6-4e97-45aa-b569-928af482a1f1'' moved to trashcan Oct 14 06:24:54 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:24:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:159dc3c6-4e97-45aa-b569-928af482a1f1, vol_name:cephfs) < "" Oct 14 06:24:55 localhost nova_compute[295778]: 2025-10-14 10:24:55.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:24:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 599 B/s rd, 78 KiB/s wr, 5 op/s Oct 14 06:24:56 localhost nova_compute[295778]: 2025-10-14 10:24:56.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:24:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9b17e37f-7694-46e4-a5af-a040c6854552", "format": "json"}]: dispatch Oct 14 06:24:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9b17e37f-7694-46e4-a5af-a040c6854552, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:57 localhost ceph-mgr[300442]: [volumes 
INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9b17e37f-7694-46e4-a5af-a040c6854552, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:24:57 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:24:57.316+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9b17e37f-7694-46e4-a5af-a040c6854552' of type subvolume Oct 14 06:24:57 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9b17e37f-7694-46e4-a5af-a040c6854552' of type subvolume Oct 14 06:24:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9b17e37f-7694-46e4-a5af-a040c6854552", "force": true, "format": "json"}]: dispatch Oct 14 06:24:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9b17e37f-7694-46e4-a5af-a040c6854552, vol_name:cephfs) < "" Oct 14 06:24:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9b17e37f-7694-46e4-a5af-a040c6854552'' moved to trashcan Oct 14 06:24:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:24:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9b17e37f-7694-46e4-a5af-a040c6854552, vol_name:cephfs) < "" Oct 14 06:24:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 67 KiB/s wr, 4 op/s Oct 14 06:24:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:24:57.649 161932 DEBUG 
oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:24:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:24:57.649 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:24:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:24:57.649 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:24:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:24:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9, vol_name:cephfs) < "" Oct 14 06:24:58 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9/.meta.tmp' Oct 14 06:24:58 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9/.meta.tmp' to config b'/volumes/_nogroup/86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9/.meta' Oct 14 06:24:58 localhost 
ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9, vol_name:cephfs) < "" Oct 14 06:24:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9", "format": "json"}]: dispatch Oct 14 06:24:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9, vol_name:cephfs) < "" Oct 14 06:24:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9, vol_name:cephfs) < "" Oct 14 06:24:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:24:58 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:24:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:24:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 67 KiB/s wr, 4 op/s Oct 14 06:24:59 localhost podman[348299]: 2025-10-14 10:24:59.538197835 +0000 UTC m=+0.082951706 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS) Oct 14 06:24:59 localhost podman[348299]: 2025-10-14 10:24:59.548708343 +0000 UTC m=+0.093462184 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true) Oct 14 06:24:59 localhost systemd[1]: 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:25:00 localhost nova_compute[295778]: 2025-10-14 10:25:00.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:25:00 localhost podman[246584]: time="2025-10-14T10:25:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:25:00 localhost podman[246584]: @ - - [14/Oct/2025:10:25:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:25:00 localhost podman[246584]: @ - - [14/Oct/2025:10:25:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18922 "" "Go-http-client/1.1" Oct 14 06:25:01 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9", "format": "json"}]: dispatch Oct 14 06:25:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:01 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9' of type subvolume Oct 14 06:25:01 localhost 
ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:25:01.164+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9' of type subvolume Oct 14 06:25:01 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9", "force": true, "format": "json"}]: dispatch Oct 14 06:25:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9, vol_name:cephfs) < "" Oct 14 06:25:01 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9'' moved to trashcan Oct 14 06:25:01 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:25:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86e80ff0-6c5f-4c6b-93c7-2a4acb3e45d9, vol_name:cephfs) < "" Oct 14 06:25:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 90 KiB/s wr, 6 op/s Oct 14 06:25:01 localhost nova_compute[295778]: 2025-10-14 10:25:01.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:03 localhost openstack_network_exporter[248748]: ERROR 10:25:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:25:03 localhost openstack_network_exporter[248748]: ERROR 10:25:03 appctl.go:144: Failed to get PID 
for ovn-northd: no control socket files found for ovn-northd
Oct 14 06:25:03 localhost openstack_network_exporter[248748]: ERROR 10:25:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 14 06:25:03 localhost openstack_network_exporter[248748]: ERROR 10:25:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 14 06:25:03 localhost openstack_network_exporter[248748]:
Oct 14 06:25:03 localhost openstack_network_exporter[248748]: ERROR 10:25:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 14 06:25:03 localhost openstack_network_exporter[248748]:
Oct 14 06:25:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 65 KiB/s wr, 5 op/s
Oct 14 06:25:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0095c088-6d38-4b6e-960a-160983ddb0a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:25:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0095c088-6d38-4b6e-960a-160983ddb0a9, vol_name:cephfs) < ""
Oct 14 06:25:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0095c088-6d38-4b6e-960a-160983ddb0a9/.meta.tmp'
Oct 14 06:25:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0095c088-6d38-4b6e-960a-160983ddb0a9/.meta.tmp' to config b'/volumes/_nogroup/0095c088-6d38-4b6e-960a-160983ddb0a9/.meta'
Oct 14 06:25:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0095c088-6d38-4b6e-960a-160983ddb0a9, vol_name:cephfs) < ""
Oct 14 06:25:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0095c088-6d38-4b6e-960a-160983ddb0a9", "format": "json"}]: dispatch
Oct 14 06:25:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0095c088-6d38-4b6e-960a-160983ddb0a9, vol_name:cephfs) < ""
Oct 14 06:25:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0095c088-6d38-4b6e-960a-160983ddb0a9, vol_name:cephfs) < ""
Oct 14 06:25:04 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:25:04 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:25:05 localhost nova_compute[295778]: 2025-10-14 10:25:05.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:25:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 06:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 06:25:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 91 KiB/s wr, 7 op/s
Oct 14 06:25:05 localhost podman[348319]: 2025-10-14 10:25:05.551166647 +0000 UTC m=+0.088557175 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 06:25:05 localhost podman[348319]: 2025-10-14 10:25:05.560198655 +0000 UTC m=+0.097589213 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 06:25:05 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 06:25:05 localhost podman[348320]: 2025-10-14 10:25:05.653192706 +0000 UTC m=+0.186619999 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 14 06:25:05 localhost podman[348320]: 2025-10-14 10:25:05.66694005 +0000 UTC m=+0.200367333 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 06:25:05 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:25:06 localhost nova_compute[295778]: 2025-10-14 10:25:06.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:25:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 49 KiB/s wr, 4 op/s
Oct 14 06:25:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0095c088-6d38-4b6e-960a-160983ddb0a9", "format": "json"}]: dispatch
Oct 14 06:25:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0095c088-6d38-4b6e-960a-160983ddb0a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:25:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0095c088-6d38-4b6e-960a-160983ddb0a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:25:07 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:25:07.906+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0095c088-6d38-4b6e-960a-160983ddb0a9' of type subvolume
Oct 14 06:25:07 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0095c088-6d38-4b6e-960a-160983ddb0a9' of type subvolume
Oct 14 06:25:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0095c088-6d38-4b6e-960a-160983ddb0a9", "force": true, "format": "json"}]: dispatch
Oct 14 06:25:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0095c088-6d38-4b6e-960a-160983ddb0a9, vol_name:cephfs) < ""
Oct 14 06:25:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0095c088-6d38-4b6e-960a-160983ddb0a9'' moved to trashcan
Oct 14 06:25:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:25:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0095c088-6d38-4b6e-960a-160983ddb0a9, vol_name:cephfs) < ""
Oct 14 06:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 06:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 06:25:08 localhost podman[348363]: 2025-10-14 10:25:08.55754477 +0000 UTC m=+0.097209803 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 06:25:08 localhost podman[348363]: 2025-10-14 10:25:08.57454726 +0000 UTC m=+0.114212343 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Oct 14 06:25:08 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:25:08 localhost podman[348362]: 2025-10-14 10:25:08.588816167 +0000 UTC m=+0.132770974 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 06:25:08 localhost podman[348362]: 2025-10-14 10:25:08.60213625 +0000 UTC m=+0.146091057 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d)
Oct 14 06:25:08 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:25:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:25:09
Oct 14 06:25:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:25:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap
Oct 14 06:25:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['.mgr', 'volumes', 'manila_metadata', 'backups', 'manila_data', 'images', 'vms']
Oct 14 06:25:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:25:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:25:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:25:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:25:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:25:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:25:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:25:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 49 KiB/s wr, 4 op/s
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32)
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32)
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:25:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0017734614914852037 of space, bias 4.0, pg target 1.411675347222222 quantized to 16 (current 16)
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:25:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:25:10 localhost nova_compute[295778]: 2025-10-14 10:25:10.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:25:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:25:11 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "40a3d0c7-a8ab-4f98-b9ab-569099aeb089", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 14 06:25:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:40a3d0c7-a8ab-4f98-b9ab-569099aeb089, vol_name:cephfs) < ""
Oct 14 06:25:11 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/40a3d0c7-a8ab-4f98-b9ab-569099aeb089/.meta.tmp'
Oct 14 06:25:11 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/40a3d0c7-a8ab-4f98-b9ab-569099aeb089/.meta.tmp' to config b'/volumes/_nogroup/40a3d0c7-a8ab-4f98-b9ab-569099aeb089/.meta'
Oct 14 06:25:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:40a3d0c7-a8ab-4f98-b9ab-569099aeb089, vol_name:cephfs) < ""
Oct 14 06:25:11 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "40a3d0c7-a8ab-4f98-b9ab-569099aeb089", "format": "json"}]: dispatch
Oct 14 06:25:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:40a3d0c7-a8ab-4f98-b9ab-569099aeb089, vol_name:cephfs) < ""
Oct 14 06:25:11 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:40a3d0c7-a8ab-4f98-b9ab-569099aeb089, vol_name:cephfs) < ""
Oct 14 06:25:11 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 14 06:25:11 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 14 06:25:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 68 KiB/s wr, 5 op/s
Oct 14 06:25:11 localhost nova_compute[295778]: 2025-10-14 10:25:11.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:25:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Oct 14 06:25:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "40a3d0c7-a8ab-4f98-b9ab-569099aeb089", "format": "json"}]: dispatch
Oct 14 06:25:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:40a3d0c7-a8ab-4f98-b9ab-569099aeb089, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:25:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:40a3d0c7-a8ab-4f98-b9ab-569099aeb089, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:25:14 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:25:14.589+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '40a3d0c7-a8ab-4f98-b9ab-569099aeb089' of type subvolume
Oct 14 06:25:14 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '40a3d0c7-a8ab-4f98-b9ab-569099aeb089' of type subvolume
Oct 14 06:25:14 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "40a3d0c7-a8ab-4f98-b9ab-569099aeb089", "force": true, "format": "json"}]: dispatch
Oct 14 06:25:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:40a3d0c7-a8ab-4f98-b9ab-569099aeb089, vol_name:cephfs) < ""
Oct 14 06:25:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/40a3d0c7-a8ab-4f98-b9ab-569099aeb089'' moved to trashcan
Oct 14 06:25:14 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 14 06:25:14 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:40a3d0c7-a8ab-4f98-b9ab-569099aeb089, vol_name:cephfs) < ""
Oct 14 06:25:15 localhost nova_compute[295778]: 2025-10-14 10:25:15.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:25:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:25:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 73 KiB/s wr, 4 op/s
Oct 14 06:25:16 localhost nova_compute[295778]: 2025-10-14 10:25:16.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:25:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:25:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:25:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:25:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 47 KiB/s wr, 2 op/s
Oct 14 06:25:17 localhost podman[348400]: 2025-10-14 10:25:17.554325937 +0000 UTC m=+0.093630530 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.6)
Oct 14 06:25:17 localhost podman[348400]: 2025-10-14 10:25:17.596222515 +0000 UTC m=+0.135527128 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 14 06:25:17 localhost podman[348402]: 2025-10-14 10:25:17.607024941 +0000 UTC m=+0.137370586 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw',
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:25:17 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:25:17 localhost podman[348402]: 2025-10-14 10:25:17.619120241 +0000 UTC m=+0.149465886 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 14 06:25:17 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:25:17 localhost podman[348401]: 2025-10-14 10:25:17.711851495 +0000 UTC m=+0.244984384 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 14 06:25:17 localhost podman[348401]: 2025-10-14 10:25:17.755573463 +0000 UTC m=+0.288706302 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 14 06:25:17 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:25:18 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "81ec204c-5a9b-4914-8e15-0dd2a0b60088", "snap_name": "c2b1dd95-2114-4ea2-aeeb-956be33dcefe_a607055b-5edb-478a-940f-df0f28aaea2b", "force": true, "format": "json"}]: dispatch Oct 14 06:25:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c2b1dd95-2114-4ea2-aeeb-956be33dcefe_a607055b-5edb-478a-940f-df0f28aaea2b, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:25:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088/.meta.tmp' Oct 14 06:25:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088/.meta.tmp' to config b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088/.meta' Oct 14 06:25:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c2b1dd95-2114-4ea2-aeeb-956be33dcefe_a607055b-5edb-478a-940f-df0f28aaea2b, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:25:18 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "81ec204c-5a9b-4914-8e15-0dd2a0b60088", "snap_name": "c2b1dd95-2114-4ea2-aeeb-956be33dcefe", "force": true, "format": "json"}]: dispatch Oct 14 06:25:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:c2b1dd95-2114-4ea2-aeeb-956be33dcefe, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:25:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088/.meta.tmp' Oct 14 06:25:18 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088/.meta.tmp' to config b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088/.meta' Oct 14 06:25:18 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c2b1dd95-2114-4ea2-aeeb-956be33dcefe, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:25:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 47 KiB/s wr, 2 op/s Oct 14 06:25:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:25:20 localhost nova_compute[295778]: 2025-10-14 10:25:20.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 66 KiB/s wr, 4 op/s Oct 14 06:25:21 localhost nova_compute[295778]: 2025-10-14 10:25:21.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": 
"fs clone status", "vol_name": "cephfs", "clone_name": "81ec204c-5a9b-4914-8e15-0dd2a0b60088", "format": "json"}]: dispatch Oct 14 06:25:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:22 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:25:22.065+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81ec204c-5a9b-4914-8e15-0dd2a0b60088' of type subvolume Oct 14 06:25:22 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81ec204c-5a9b-4914-8e15-0dd2a0b60088' of type subvolume Oct 14 06:25:22 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "81ec204c-5a9b-4914-8e15-0dd2a0b60088", "force": true, "format": "json"}]: dispatch Oct 14 06:25:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:25:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/81ec204c-5a9b-4914-8e15-0dd2a0b60088'' moved to trashcan Oct 14 06:25:22 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:25:22 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, 
format:json, prefix:fs subvolume rm, sub_name:81ec204c-5a9b-4914-8e15-0dd2a0b60088, vol_name:cephfs) < "" Oct 14 06:25:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 47 KiB/s wr, 3 op/s Oct 14 06:25:24 localhost ovn_controller[156286]: 2025-10-14T10:25:24Z|00429|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory Oct 14 06:25:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:25:25 localhost nova_compute[295778]: 2025-10-14 10:25:25.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 60 KiB/s wr, 5 op/s Oct 14 06:25:26 localhost nova_compute[295778]: 2025-10-14 10:25:26.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e258 do_prune osdmap full prune enabled Oct 14 06:25:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e259 e259: 6 total, 6 up, 6 in Oct 14 06:25:27 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e259: 6 total, 6 up, 6 in Oct 14 06:25:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 39 KiB/s wr, 4 op/s Oct 14 06:25:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 39 KiB/s wr, 4 op/s Oct 14 
06:25:29 localhost nova_compute[295778]: 2025-10-14 10:25:29.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:25:29 localhost nova_compute[295778]: 2025-10-14 10:25:29.931 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:25:29 localhost nova_compute[295778]: 2025-10-14 10:25:29.931 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:25:29 localhost nova_compute[295778]: 2025-10-14 10:25:29.931 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:25:29 localhost nova_compute[295778]: 2025-10-14 10:25:29.932 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:25:29 localhost nova_compute[295778]: 2025-10-14 10:25:29.932 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd 
(subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:25:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:25:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:25:30 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4063698005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:25:30 localhost nova_compute[295778]: 2025-10-14 10:25:30.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:30 localhost nova_compute[295778]: 2025-10-14 10:25:30.395 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:25:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:25:30 localhost podman[348489]: 2025-10-14 10:25:30.527921419 +0000 UTC m=+0.074645866 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute) Oct 14 06:25:30 localhost podman[348489]: 2025-10-14 10:25:30.534268387 +0000 UTC m=+0.080992824 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:25:30 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:25:30 localhost nova_compute[295778]: 2025-10-14 10:25:30.585 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:25:30 localhost nova_compute[295778]: 2025-10-14 10:25:30.586 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11315MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:25:30 localhost nova_compute[295778]: 2025-10-14 10:25:30.587 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:25:30 localhost nova_compute[295778]: 2025-10-14 10:25:30.587 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:25:30 localhost podman[246584]: time="2025-10-14T10:25:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:25:30 localhost podman[246584]: @ - - [14/Oct/2025:10:25:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:25:30 localhost podman[246584]: @ - - [14/Oct/2025:10:25:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18916 "" "Go-http-client/1.1" Oct 14 06:25:30 localhost nova_compute[295778]: 2025-10-14 
10:25:30.650 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:25:30 localhost nova_compute[295778]: 2025-10-14 10:25:30.650 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:25:30 localhost nova_compute[295778]: 2025-10-14 10:25:30.673 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:25:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:25:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1314888092' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:25:31 localhost nova_compute[295778]: 2025-10-14 10:25:31.115 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:25:31 localhost nova_compute[295778]: 2025-10-14 10:25:31.122 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:25:31 localhost nova_compute[295778]: 2025-10-14 10:25:31.150 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:25:31 localhost nova_compute[295778]: 2025-10-14 10:25:31.153 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:25:31 localhost nova_compute[295778]: 2025-10-14 10:25:31.153 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:25:31 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a03b8760-4702-4abd-9b55-1ec500316cae", "format": "json"}]: dispatch Oct 14 06:25:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a03b8760-4702-4abd-9b55-1ec500316cae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 23 KiB/s wr, 2 op/s Oct 14 06:25:31 localhost nova_compute[295778]: 2025-10-14 10:25:31.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a03b8760-4702-4abd-9b55-1ec500316cae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:31 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a03b8760-4702-4abd-9b55-1ec500316cae", "format": "json"}]: dispatch Oct 14 06:25:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a03b8760-4702-4abd-9b55-1ec500316cae, vol_name:cephfs) < "" Oct 14 06:25:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:a03b8760-4702-4abd-9b55-1ec500316cae, vol_name:cephfs) < "" Oct 14 06:25:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:25:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:25:33 localhost openstack_network_exporter[248748]: ERROR 10:25:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:25:33 localhost openstack_network_exporter[248748]: ERROR 10:25:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:25:33 localhost openstack_network_exporter[248748]: ERROR 10:25:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:25:33 localhost openstack_network_exporter[248748]: ERROR 10:25:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:25:33 localhost openstack_network_exporter[248748]: Oct 14 06:25:33 localhost openstack_network_exporter[248748]: ERROR 10:25:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:25:33 localhost openstack_network_exporter[248748]: Oct 14 06:25:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 23 KiB/s wr, 2 op/s Oct 14 06:25:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a03b8760-4702-4abd-9b55-1ec500316cae", "format": "json"}]: dispatch Oct 14 06:25:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:a03b8760-4702-4abd-9b55-1ec500316cae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a03b8760-4702-4abd-9b55-1ec500316cae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:34 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a03b8760-4702-4abd-9b55-1ec500316cae", "force": true, "format": "json"}]: dispatch Oct 14 06:25:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a03b8760-4702-4abd-9b55-1ec500316cae, vol_name:cephfs) < "" Oct 14 06:25:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a03b8760-4702-4abd-9b55-1ec500316cae'' moved to trashcan Oct 14 06:25:34 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:25:34 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a03b8760-4702-4abd-9b55-1ec500316cae, vol_name:cephfs) < "" Oct 14 06:25:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:25:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e259 do_prune osdmap full prune enabled Oct 14 06:25:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e260 e260: 6 total, 6 up, 6 in Oct 14 06:25:35 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e260: 6 total, 6 up, 6 in Oct 14 06:25:35 localhost nova_compute[295778]: 2025-10-14 10:25:35.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 125 B/s rd, 42 KiB/s wr, 1 op/s Oct 14 06:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:25:36 localhost podman[348531]: 2025-10-14 10:25:36.541038285 +0000 UTC m=+0.080914102 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 14 06:25:36 localhost podman[348531]: 2025-10-14 10:25:36.550245429 +0000 UTC m=+0.090121246 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 14 06:25:36 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. Oct 14 06:25:36 localhost podman[348532]: 2025-10-14 10:25:36.595330772 +0000 UTC m=+0.132141748 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:25:36 localhost podman[348532]: 2025-10-14 10:25:36.604643108 +0000 UTC m=+0.141454084 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, 
name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 14 06:25:36 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:25:36 localhost nova_compute[295778]: 2025-10-14 10:25:36.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:37 localhost nova_compute[295778]: 2025-10-14 10:25:37.153 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:25:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "23258dac-df20-457d-903e-dcae3603356a", "snap_name": "ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35_0d0de2ac-a0d4-4ab6-bde5-b2dfe0ca6374", "force": true, "format": "json"}]: dispatch Oct 14 06:25:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35_0d0de2ac-a0d4-4ab6-bde5-b2dfe0ca6374, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < "" Oct 14 06:25:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp' Oct 14 06:25:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp' to config b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta' Oct 14 06:25:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35_0d0de2ac-a0d4-4ab6-bde5-b2dfe0ca6374, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < "" Oct 14 06:25:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "23258dac-df20-457d-903e-dcae3603356a", "snap_name": "ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35", "force": true, "format": "json"}]: dispatch Oct 14 06:25:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < "" Oct 14 06:25:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp' Oct 14 06:25:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta.tmp' to config 
b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a/.meta' Oct 14 06:25:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 34 KiB/s wr, 1 op/s Oct 14 06:25:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ff2ce27f-716d-4b97-9eb1-d5d1f6efbc35, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < "" Oct 14 06:25:37 localhost nova_compute[295778]: 2025-10-14 10:25:37.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:25:37 localhost nova_compute[295778]: 2025-10-14 10:25:37.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:25:37 localhost nova_compute[295778]: 2025-10-14 10:25:37.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:25:37 localhost nova_compute[295778]: 2025-10-14 10:25:37.917 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:25:38 localhost nova_compute[295778]: 2025-10-14 10:25:38.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:25:38 localhost nova_compute[295778]: 2025-10-14 10:25:38.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:25:38 localhost nova_compute[295778]: 2025-10-14 10:25:38.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:25:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:25:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:25:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:25:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:25:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:25:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Oct 14 06:25:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 14 06:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. 
Oct 14 06:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:25:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 34 KiB/s wr, 1 op/s Oct 14 06:25:39 localhost podman[348573]: 2025-10-14 10:25:39.548131288 +0000 UTC m=+0.087348832 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, 
config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009) Oct 14 06:25:39 localhost podman[348573]: 2025-10-14 10:25:39.560218827 +0000 UTC m=+0.099436371 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251009) Oct 14 06:25:39 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:25:39 localhost podman[348574]: 2025-10-14 10:25:39.608140666 +0000 UTC m=+0.143358005 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Oct 14 06:25:39 localhost podman[348574]: 2025-10-14 10:25:39.647077276 +0000 UTC m=+0.182294585 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 14 06:25:39 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:25:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 14 06:25:39 localhost nova_compute[295778]: 2025-10-14 10:25:39.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:25:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:25:40 localhost nova_compute[295778]: 2025-10-14 10:25:40.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "23258dac-df20-457d-903e-dcae3603356a", "format": "json"}]: dispatch Oct 14 06:25:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:23258dac-df20-457d-903e-dcae3603356a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:23258dac-df20-457d-903e-dcae3603356a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:25:40 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:25:40.701+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '23258dac-df20-457d-903e-dcae3603356a' of type subvolume Oct 14 06:25:40 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'23258dac-df20-457d-903e-dcae3603356a' of type subvolume Oct 14 06:25:40 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "23258dac-df20-457d-903e-dcae3603356a", "force": true, "format": "json"}]: dispatch Oct 14 06:25:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < "" Oct 14 06:25:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/23258dac-df20-457d-903e-dcae3603356a'' moved to trashcan Oct 14 06:25:40 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:25:40 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:23258dac-df20-457d-903e-dcae3603356a, vol_name:cephfs) < "" Oct 14 06:25:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 43 KiB/s wr, 3 op/s Oct 14 06:25:41 localhost nova_compute[295778]: 2025-10-14 10:25:41.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:25:42 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:25:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": 
"client.admin"} v 0) Oct 14 06:25:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:25:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:25:42 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:25:42 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 930ece1a-a8e9-4e79-95bc-99011ba4c962 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:25:42 localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 930ece1a-a8e9-4e79-95bc-99011ba4c962 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:25:42 localhost ceph-mgr[300442]: [progress INFO root] Completed event 930ece1a-a8e9-4e79-95bc-99011ba4c962 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:25:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:25:42 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:25:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e260 do_prune osdmap full prune enabled Oct 14 06:25:42 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:25:42 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:25:42 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e261 e261: 6 total, 6 up, 6 in Oct 14 
06:25:42 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e46: np0005486731.swasqz(active, since 17m), standbys: np0005486732.pasqzz, np0005486733.primvu Oct 14 06:25:42 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e261: 6 total, 6 up, 6 in Oct 14 06:25:42 localhost nova_compute[295778]: 2025-10-14 10:25:42.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:25:42 localhost nova_compute[295778]: 2025-10-14 10:25:42.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:25:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 498 B/s rd, 20 KiB/s wr, 3 op/s Oct 14 06:25:43 localhost nova_compute[295778]: 2025-10-14 10:25:43.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:25:44 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:25:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:25:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:25:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 
full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:25:45 localhost nova_compute[295778]: 2025-10-14 10:25:45.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 43 KiB/s wr, 4 op/s Oct 14 06:25:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:25:46 localhost nova_compute[295778]: 2025-10-14 10:25:46.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 43 KiB/s wr, 4 op/s Oct 14 06:25:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:25:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:25:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:25:48 localhost podman[348699]: 2025-10-14 10:25:48.562263066 +0000 UTC m=+0.099275219 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.build-date=20251009) Oct 14 06:25:48 localhost podman[348698]: 2025-10-14 10:25:48.633787158 +0000 UTC m=+0.170548515 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_id=edpm, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 14 06:25:48 localhost podman[348700]: 2025-10-14 10:25:48.644647705 +0000 UTC m=+0.175213628 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:25:48 localhost podman[348698]: 2025-10-14 10:25:48.652048062 +0000 UTC m=+0.188809459 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 14 06:25:48 localhost podman[348700]: 2025-10-14 10:25:48.660164996 +0000 UTC m=+0.190730929 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 
06:25:48 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:25:48 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. Oct 14 06:25:48 localhost podman[348699]: 2025-10-14 10:25:48.707095368 +0000 UTC m=+0.244107581 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 14 06:25:48 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:25:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 43 KiB/s wr, 4 op/s Oct 14 06:25:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:25:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e261 do_prune osdmap full prune enabled Oct 14 06:25:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e262 e262: 6 total, 6 up, 6 in Oct 14 06:25:50 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e262: 6 total, 6 up, 6 in Oct 14 06:25:50 localhost nova_compute[295778]: 2025-10-14 10:25:50.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 234 B/s rd, 38 KiB/s wr, 2 op/s Oct 14 06:25:51 localhost nova_compute[295778]: 2025-10-14 10:25:51.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 33 KiB/s wr, 2 op/s Oct 14 06:25:54 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:25:54 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:25:55 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' Oct 14 06:25:55 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta' Oct 14 06:25:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:25:55 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "format": "json"}]: dispatch Oct 14 06:25:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:25:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:25:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:25:55 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:25:55 localhost ceph-mon[307093]: 
mon.np0005486731@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:25:55 localhost nova_compute[295778]: 2025-10-14 10:25:55.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s wr, 1 op/s Oct 14 06:25:56 localhost nova_compute[295778]: 2025-10-14 10:25:56.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:25:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s wr, 1 op/s Oct 14 06:25:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a9c5d3c-6540-4f73-b15c-cbb5368b3746", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:25:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:25:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' Oct 14 06:25:57 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta' Oct 14 
06:25:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:25:57 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a9c5d3c-6540-4f73-b15c-cbb5368b3746", "format": "json"}]: dispatch Oct 14 06:25:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:25:57 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:25:57 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:25:57 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:25:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:25:57.650 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:25:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:25:57.651 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:25:57 
localhost ovn_metadata_agent[161927]: 2025-10-14 10:25:57.651 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:25:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "8d71f2e5-3ef1-4eed-8f9b-8ebc337dedc1", "format": "json"}]: dispatch Oct 14 06:25:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8d71f2e5-3ef1-4eed-8f9b-8ebc337dedc1, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:25:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8d71f2e5-3ef1-4eed-8f9b-8ebc337dedc1, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:25:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s wr, 1 op/s Oct 14 06:26:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:26:00 localhost nova_compute[295778]: 2025-10-14 10:26:00.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:00 localhost podman[246584]: time="2025-10-14T10:26:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:26:00 localhost podman[246584]: @ - - 
[14/Oct/2025:10:26:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:26:00 localhost podman[246584]: @ - - [14/Oct/2025:10:26:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18919 "" "Go-http-client/1.1" Oct 14 06:26:00 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7a9c5d3c-6540-4f73-b15c-cbb5368b3746", "snap_name": "8bdda4f6-7878-494d-ab7c-e9a419ae575a", "format": "json"}]: dispatch Oct 14 06:26:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8bdda4f6-7878-494d-ab7c-e9a419ae575a, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:26:00 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8bdda4f6-7878-494d-ab7c-e9a419ae575a, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:26:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:26:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s wr, 1 op/s Oct 14 06:26:01 localhost podman[348763]: 2025-10-14 10:26:01.543632914 +0000 UTC m=+0.084009215 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, 
config_id=edpm, org.label-schema.build-date=20251009) Oct 14 06:26:01 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "dc501759-1db6-4c86-b98d-1c342bd0fc4f", "format": "json"}]: dispatch Oct 14 06:26:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:dc501759-1db6-4c86-b98d-1c342bd0fc4f, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:01 localhost podman[348763]: 2025-10-14 10:26:01.58316564 +0000 UTC m=+0.123541901 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:26:01 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:dc501759-1db6-4c86-b98d-1c342bd0fc4f, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:01 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:26:01 localhost nova_compute[295778]: 2025-10-14 10:26:01.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:03 localhost openstack_network_exporter[248748]: ERROR 10:26:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:26:03 localhost openstack_network_exporter[248748]: ERROR 10:26:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:26:03 localhost openstack_network_exporter[248748]: ERROR 10:26:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:26:03 localhost openstack_network_exporter[248748]: ERROR 10:26:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:26:03 localhost openstack_network_exporter[248748]: Oct 14 06:26:03 localhost openstack_network_exporter[248748]: ERROR 10:26:03 appctl.go:174: 
call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:26:03 localhost openstack_network_exporter[248748]: Oct 14 06:26:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s wr, 1 op/s Oct 14 06:26:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "7a9c5d3c-6540-4f73-b15c-cbb5368b3746", "snap_name": "8bdda4f6-7878-494d-ab7c-e9a419ae575a", "target_sub_name": "5189cc9c-7439-49a5-8389-5033e305ed93", "format": "json"}]: dispatch Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:8bdda4f6-7878-494d-ab7c-e9a419ae575a, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, target_sub_name:5189cc9c-7439-49a5-8389-5033e305ed93, vol_name:cephfs) < "" Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta.tmp' Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta.tmp' to config b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta' Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.clone_index] tracking-id d72d17fc-621b-47a3-a4de-c7bcc1eea034 for path b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93' Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta' Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:8bdda4f6-7878-494d-ab7c-e9a419ae575a, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, target_sub_name:5189cc9c-7439-49a5-8389-5033e305ed93, vol_name:cephfs) < "" Oct 14 06:26:04 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5189cc9c-7439-49a5-8389-5033e305ed93", "format": "json"}]: dispatch Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5189cc9c-7439-49a5-8389-5033e305ed93, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:26:04 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.492+0000 7ff5dcf7f640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.492+0000 7ff5dcf7f640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.492+0000 7ff5dcf7f640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 
localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.492+0000 7ff5dcf7f640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.492+0000 7ff5dcf7f640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5189cc9c-7439-49a5-8389-5033e305ed93, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93 Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 5189cc9c-7439-49a5-8389-5033e305ed93) Oct 14 06:26:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.522+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.522+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost 
ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.522+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.522+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:04.522+0000 7ff5ddf81640 -1 client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: client.0 error registering admin socket command: (17) File exists Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 5189cc9c-7439-49a5-8389-5033e305ed93) -- by 0 seconds Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta.tmp' Oct 14 06:26:04 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta.tmp' to config b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta' Oct 14 06:26:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:26:05 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", 
"snap_name": "dc501759-1db6-4c86-b98d-1c342bd0fc4f_93cae88d-de4b-457c-9df0-35470f7ceb85", "force": true, "format": "json"}]: dispatch Oct 14 06:26:05 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc501759-1db6-4c86-b98d-1c342bd0fc4f_93cae88d-de4b-457c-9df0-35470f7ceb85, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 52 KiB/s wr, 3 op/s Oct 14 06:26:05 localhost nova_compute[295778]: 2025-10-14 10:26:05.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:06 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e47: np0005486731.swasqz(active, since 17m), standbys: np0005486732.pasqzz, np0005486733.primvu Oct 14 06:26:06 localhost nova_compute[295778]: 2025-10-14 10:26:06.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:06 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.snap/8bdda4f6-7878-494d-ab7c-e9a419ae575a/4992bdc3-1d1f-4bb2-97a5-864fe25e3726' to b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/50c5295f-b3d1-46b4-9d7f-b8045cd65d43' Oct 14 06:26:06 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' Oct 14 06:26:06 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config 
b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta' Oct 14 06:26:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc501759-1db6-4c86-b98d-1c342bd0fc4f_93cae88d-de4b-457c-9df0-35470f7ceb85, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:07 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "dc501759-1db6-4c86-b98d-1c342bd0fc4f", "force": true, "format": "json"}]: dispatch Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc501759-1db6-4c86-b98d-1c342bd0fc4f, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta' Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:dc501759-1db6-4c86-b98d-1c342bd0fc4f, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta.tmp' Oct 14 06:26:07 localhost 
ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta.tmp' to config b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta' Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.clone_index] untracking d72d17fc-621b-47a3-a4de-c7bcc1eea034 Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta' Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta.tmp' Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta.tmp' to config b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93/.meta' Oct 14 06:26:07 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 5189cc9c-7439-49a5-8389-5033e305ed93) Oct 14 06:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:26:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 43 KiB/s wr, 3 op/s Oct 14 06:26:07 localhost podman[348806]: 2025-10-14 10:26:07.550345709 +0000 UTC m=+0.081442806 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:26:07 localhost podman[348806]: 2025-10-14 10:26:07.585197761 +0000 UTC m=+0.116294858 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, 
org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:26:07 localhost podman[348807]: 2025-10-14 10:26:07.603582748 +0000 UTC m=+0.130780872 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:26:07 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:26:07 localhost podman[348807]: 2025-10-14 10:26:07.612764321 +0000 UTC m=+0.139962445 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:26:07 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:26:08 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "296544ab-848f-4dcd-9965-77132767207a", "format": "json"}]: dispatch Oct 14 06:26:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:296544ab-848f-4dcd-9965-77132767207a, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:08 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:296544ab-848f-4dcd-9965-77132767207a, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:26:09 Oct 14 06:26:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:26:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:26:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['manila_data', 'manila_metadata', '.mgr', 'backups', 'vms', 'images', 'volumes'] Oct 14 06:26:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:26:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:26:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:26:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:26:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:26:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 43 KiB/s wr, 3 op/s Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:26:09 localhost 
ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32) Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:26:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0019313145589056394 of space, bias 4.0, pg target 1.537326388888889 quantized to 16 (current 16) Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:26:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:26:09 localhost ovn_controller[156286]: 2025-10-14T10:26:09Z|00430|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory Oct 14 06:26:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:26:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:26:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:26:10 localhost systemd[1]: tmp-crun.VSSnHj.mount: Deactivated successfully. Oct 14 06:26:10 localhost podman[348847]: 2025-10-14 10:26:10.553562979 +0000 UTC m=+0.092555300 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 14 06:26:10 localhost podman[348848]: 2025-10-14 10:26:10.594200304 +0000 UTC m=+0.129717104 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:26:10 localhost podman[348847]: 2025-10-14 10:26:10.641065024 +0000 UTC m=+0.180057305 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 
06:26:10 localhost nova_compute[295778]: 2025-10-14 10:26:10.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:10 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:26:10 localhost podman[348848]: 2025-10-14 10:26:10.693689527 +0000 UTC m=+0.229206317 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 06:26:10 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:26:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 72 KiB/s wr, 6 op/s
Oct 14 06:26:11 localhost nova_compute[295778]: 2025-10-14 10:26:11.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 14 06:26:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e263 e263: 6 total, 6 up, 6 in
Oct 14 06:26:12 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e263: 6 total, 6 up, 6 in
Oct 14 06:26:12 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "296544ab-848f-4dcd-9965-77132767207a_978717b4-b726-45b6-82d0-6225f63f39df", "force": true, "format": "json"}]: dispatch
Oct 14 06:26:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:296544ab-848f-4dcd-9965-77132767207a_978717b4-b726-45b6-82d0-6225f63f39df, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp'
Oct 14 06:26:12 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta'
Oct 14 06:26:12 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:296544ab-848f-4dcd-9965-77132767207a_978717b4-b726-45b6-82d0-6225f63f39df, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:13 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "296544ab-848f-4dcd-9965-77132767207a", "force": true, "format": "json"}]: dispatch
Oct 14 06:26:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:296544ab-848f-4dcd-9965-77132767207a, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:13 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp'
Oct 14 06:26:13 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta'
Oct 14 06:26:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:296544ab-848f-4dcd-9965-77132767207a, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 613 B/s rd, 70 KiB/s wr, 6 op/s
Oct 14 06:26:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:26:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 613 B/s rd, 61 KiB/s wr, 6 op/s
Oct 14 06:26:15 localhost nova_compute[295778]: 2025-10-14 10:26:15.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:16 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "bd10e28b-d271-4b9e-9c42-809f483be37c", "format": "json"}]: dispatch
Oct 14 06:26:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bd10e28b-d271-4b9e-9c42-809f483be37c, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:16 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bd10e28b-d271-4b9e-9c42-809f483be37c, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:16 localhost nova_compute[295778]: 2025-10-14 10:26:16.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 14 06:26:17 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e264 e264: 6 total, 6 up, 6 in
Oct 14 06:26:17 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e264: 6 total, 6 up, 6 in
Oct 14 06:26:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 76 KiB/s wr, 7 op/s
Oct 14 06:26:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.
Oct 14 06:26:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.
Oct 14 06:26:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.
Oct 14 06:26:19 localhost podman[348883]: 2025-10-14 10:26:19.549208086 +0000 UTC m=+0.084339733 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Oct 14 06:26:19 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 127 B/s rd, 33 KiB/s wr, 2 op/s
Oct 14 06:26:19 localhost podman[348885]: 2025-10-14 10:26:19.617289887 +0000 UTC m=+0.147047262 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 14 06:26:19 localhost podman[348885]: 2025-10-14 10:26:19.625450593 +0000 UTC m=+0.155208028 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 14 06:26:19 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully.
Oct 14 06:26:19 localhost podman[348883]: 2025-10-14 10:26:19.642948436 +0000 UTC m=+0.178080093 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container)
Oct 14 06:26:19 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully.
Oct 14 06:26:19 localhost podman[348884]: 2025-10-14 10:26:19.714529231 +0000 UTC m=+0.246864344 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5)
Oct 14 06:26:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "bd10e28b-d271-4b9e-9c42-809f483be37c_e419c584-28c1-4e80-a9d6-8e2648445c6c", "force": true, "format": "json"}]: dispatch
Oct 14 06:26:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bd10e28b-d271-4b9e-9c42-809f483be37c_e419c584-28c1-4e80-a9d6-8e2648445c6c, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp'
Oct 14 06:26:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta'
Oct 14 06:26:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bd10e28b-d271-4b9e-9c42-809f483be37c_e419c584-28c1-4e80-a9d6-8e2648445c6c, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:19 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "bd10e28b-d271-4b9e-9c42-809f483be37c", "force": true, "format": "json"}]: dispatch
Oct 14 06:26:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bd10e28b-d271-4b9e-9c42-809f483be37c, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp'
Oct 14 06:26:19 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta'
Oct 14 06:26:19 localhost podman[348884]: 2025-10-14 10:26:19.832161654 +0000 UTC m=+0.364496727 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 06:26:19 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully.
Oct 14 06:26:19 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bd10e28b-d271-4b9e-9c42-809f483be37c, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:26:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 14 06:26:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e265 e265: 6 total, 6 up, 6 in
Oct 14 06:26:20 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e265: 6 total, 6 up, 6 in
Oct 14 06:26:20 localhost sshd[348952]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:26:20 localhost nova_compute[295778]: 2025-10-14 10:26:20.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 69 KiB/s wr, 5 op/s
Oct 14 06:26:21 localhost nova_compute[295778]: 2025-10-14 10:26:21.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:23 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "057be8a2-faa1-4ec6-999f-1e16099150d8", "format": "json"}]: dispatch
Oct 14 06:26:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:057be8a2-faa1-4ec6-999f-1e16099150d8, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:23 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:057be8a2-faa1-4ec6-999f-1e16099150d8, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 35 KiB/s wr, 2 op/s
Oct 14 06:26:24 localhost nova_compute[295778]: 2025-10-14 10:26:24.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:26:24 localhost nova_compute[295778]: 2025-10-14 10:26:24.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 06:26:24 localhost nova_compute[295778]: 2025-10-14 10:26:24.923 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 06:26:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:26:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 501 B/s rd, 71 KiB/s wr, 4 op/s
Oct 14 06:26:25 localhost nova_compute[295778]: 2025-10-14 10:26:25.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:27 localhost nova_compute[295778]: 2025-10-14 10:26:27.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 14 06:26:27 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e266 e266: 6 total, 6 up, 6 in
Oct 14 06:26:27 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e266: 6 total, 6 up, 6 in
Oct 14 06:26:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "057be8a2-faa1-4ec6-999f-1e16099150d8_d4d5de98-7f08-4f79-99e3-9f458588d924", "force": true, "format": "json"}]: dispatch
Oct 14 06:26:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:057be8a2-faa1-4ec6-999f-1e16099150d8_d4d5de98-7f08-4f79-99e3-9f458588d924, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp'
Oct 14 06:26:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta'
Oct 14 06:26:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:057be8a2-faa1-4ec6-999f-1e16099150d8_d4d5de98-7f08-4f79-99e3-9f458588d924, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:27 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "057be8a2-faa1-4ec6-999f-1e16099150d8", "force": true, "format": "json"}]: dispatch
Oct 14 06:26:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:057be8a2-faa1-4ec6-999f-1e16099150d8, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 72 KiB/s wr, 4 op/s
Oct 14 06:26:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp'
Oct 14 06:26:27 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta'
Oct 14 06:26:27 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:057be8a2-faa1-4ec6-999f-1e16099150d8, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < ""
Oct 14 06:26:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 222 B/s rd, 32 KiB/s wr, 1 op/s
Oct 14 06:26:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:26:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 14 06:26:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e267 e267: 6 total, 6 up, 6 in
Oct 14 06:26:30 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e267: 6 total, 6 up, 6 in
Oct 14 06:26:30 localhost podman[246584]: time="2025-10-14T10:26:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 14 06:26:30 localhost podman[246584]: @ - - [14/Oct/2025:10:26:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1"
Oct 14 06:26:30 localhost podman[246584]: @ - - [14/Oct/2025:10:26:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18912 "" "Go-http-client/1.1"
Oct 14 06:26:30 localhost nova_compute[295778]: 2025-10-14 10:26:30.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:30 localhost nova_compute[295778]: 2025-10-14 10:26:30.922 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:26:30 localhost nova_compute[295778]: 2025-10-14 10:26:30.950 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:26:30 localhost nova_compute[295778]: 2025-10-14 10:26:30.951 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:26:30 localhost nova_compute[295778]: 2025-10-14 10:26:30.951 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:26:30 localhost nova_compute[295778]: 2025-10-14 10:26:30.951 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 06:26:30 localhost nova_compute[295778]: 2025-10-14 10:26:30.952 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 06:26:31 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 14 06:26:31 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3669681972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 14 06:26:31 localhost nova_compute[295778]: 2025-10-14 10:26:31.395 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 06:26:31 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5189cc9c-7439-49a5-8389-5033e305ed93", "format": "json"}]: dispatch
Oct 14 06:26:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5189cc9c-7439-49a5-8389-5033e305ed93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:26:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 56 KiB/s wr, 4 op/s
Oct 14 06:26:31 localhost nova_compute[295778]: 2025-10-14 10:26:31.606 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 06:26:31 localhost nova_compute[295778]: 2025-10-14 10:26:31.608 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11315MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 06:26:31 localhost nova_compute[295778]: 2025-10-14 10:26:31.608 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:26:31 localhost nova_compute[295778]: 2025-10-14 10:26:31.609 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:26:31 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5189cc9c-7439-49a5-8389-5033e305ed93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 14 06:26:32 localhost nova_compute[295778]: 2025-10-14 10:26:32.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:26:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5189cc9c-7439-49a5-8389-5033e305ed93", "format": "json"}]: dispatch
Oct 14 06:26:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath,
sub_name:5189cc9c-7439-49a5-8389-5033e305ed93, vol_name:cephfs) < "" Oct 14 06:26:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5189cc9c-7439-49a5-8389-5033e305ed93, vol_name:cephfs) < "" Oct 14 06:26:32 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:26:32 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:26:32 localhost nova_compute[295778]: 2025-10-14 10:26:32.348 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:26:32 localhost nova_compute[295778]: 2025-10-14 10:26:32.348 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:26:32 localhost nova_compute[295778]: 2025-10-14 10:26:32.376 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing inventories for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 14 06:26:32 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", 
"snap_name": "29d07813-f6d0-498d-9c51-df401e1d9c5d", "format": "json"}]: dispatch Oct 14 06:26:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:29d07813-f6d0-498d-9c51-df401e1d9c5d, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:32 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:29d07813-f6d0-498d-9c51-df401e1d9c5d, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:26:32 localhost podman[348976]: 2025-10-14 10:26:32.547575344 +0000 UTC m=+0.087616020 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:26:32 localhost podman[348976]: 2025-10-14 10:26:32.558350969 +0000 UTC m=+0.098391635 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:26:32 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:26:32 localhost nova_compute[295778]: 2025-10-14 10:26:32.729 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating ProviderTree inventory for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 14 06:26:32 localhost nova_compute[295778]: 2025-10-14 10:26:32.729 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Updating inventory in ProviderTree for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 
'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 14 06:26:32 localhost nova_compute[295778]: 2025-10-14 10:26:32.752 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing aggregate associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 14 06:26:32 localhost nova_compute[295778]: 2025-10-14 10:26:32.773 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Refreshing trait associations for resource provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_ST
ORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 14 06:26:32 localhost nova_compute[295778]: 2025-10-14 10:26:32.800 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:26:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:26:33 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1061228469' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:26:33 localhost nova_compute[295778]: 2025-10-14 10:26:33.249 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:26:33 localhost nova_compute[295778]: 2025-10-14 10:26:33.255 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:26:33 localhost openstack_network_exporter[248748]: ERROR 10:26:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:26:33 localhost openstack_network_exporter[248748]: ERROR 10:26:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files 
found for ovn-northd Oct 14 06:26:33 localhost openstack_network_exporter[248748]: ERROR 10:26:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:26:33 localhost openstack_network_exporter[248748]: ERROR 10:26:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:26:33 localhost openstack_network_exporter[248748]: Oct 14 06:26:33 localhost openstack_network_exporter[248748]: ERROR 10:26:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:26:33 localhost openstack_network_exporter[248748]: Oct 14 06:26:33 localhost nova_compute[295778]: 2025-10-14 10:26:33.532 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:26:33 localhost nova_compute[295778]: 2025-10-14 10:26:33.535 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:26:33 localhost nova_compute[295778]: 2025-10-14 10:26:33.535 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" 
:: held 1.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:26:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 20 KiB/s wr, 2 op/s Oct 14 06:26:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:26:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 376 B/s rd, 40 KiB/s wr, 3 op/s Oct 14 06:26:35 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "23885be9-d24a-4c03-8ad0-d89d769d683a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:26:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:23885be9-d24a-4c03-8ad0-d89d769d683a, vol_name:cephfs) < "" Oct 14 06:26:35 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/23885be9-d24a-4c03-8ad0-d89d769d683a/.meta.tmp' Oct 14 06:26:35 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/23885be9-d24a-4c03-8ad0-d89d769d683a/.meta.tmp' to config b'/volumes/_nogroup/23885be9-d24a-4c03-8ad0-d89d769d683a/.meta' Oct 14 06:26:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:23885be9-d24a-4c03-8ad0-d89d769d683a, 
vol_name:cephfs) < "" Oct 14 06:26:35 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "23885be9-d24a-4c03-8ad0-d89d769d683a", "format": "json"}]: dispatch Oct 14 06:26:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:23885be9-d24a-4c03-8ad0-d89d769d683a, vol_name:cephfs) < "" Oct 14 06:26:35 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:23885be9-d24a-4c03-8ad0-d89d769d683a, vol_name:cephfs) < "" Oct 14 06:26:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:26:35 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:26:35 localhost nova_compute[295778]: 2025-10-14 10:26:35.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:37 localhost nova_compute[295778]: 2025-10-14 10:26:37.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "29d07813-f6d0-498d-9c51-df401e1d9c5d_48f384d5-3e54-4e3d-a956-7bb40e994e9d", "force": true, "format": "json"}]: dispatch Oct 14 06:26:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, 
format:json, prefix:fs subvolume snapshot rm, snap_name:29d07813-f6d0-498d-9c51-df401e1d9c5d_48f384d5-3e54-4e3d-a956-7bb40e994e9d, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' Oct 14 06:26:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta' Oct 14 06:26:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e267 do_prune osdmap full prune enabled Oct 14 06:26:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:29d07813-f6d0-498d-9c51-df401e1d9c5d_48f384d5-3e54-4e3d-a956-7bb40e994e9d, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:37 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e268 e268: 6 total, 6 up, 6 in Oct 14 06:26:37 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "29d07813-f6d0-498d-9c51-df401e1d9c5d", "force": true, "format": "json"}]: dispatch Oct 14 06:26:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:29d07813-f6d0-498d-9c51-df401e1d9c5d, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:37 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e268: 6 total, 6 up, 6 in Oct 14 06:26:37 localhost ceph-mgr[300442]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' Oct 14 06:26:37 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta' Oct 14 06:26:37 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:29d07813-f6d0-498d-9c51-df401e1d9c5d, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 41 KiB/s wr, 3 op/s Oct 14 06:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:26:38 localhost podman[349017]: 2025-10-14 10:26:38.54981219 +0000 UTC m=+0.090284930 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 14 06:26:38 localhost podman[349017]: 2025-10-14 10:26:38.560092893 +0000 UTC 
m=+0.100565653 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:26:38 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:26:38 localhost podman[349018]: 2025-10-14 10:26:38.608393281 +0000 UTC m=+0.145283125 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 14 06:26:38 localhost podman[349018]: 2025-10-14 10:26:38.645110232 +0000 UTC m=+0.182000046 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:26:38 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:26:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:26:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:26:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:26:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:26:39 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "23885be9-d24a-4c03-8ad0-d89d769d683a", "new_size": 2147483648, "format": "json"}]: dispatch Oct 14 06:26:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:23885be9-d24a-4c03-8ad0-d89d769d683a, vol_name:cephfs) < "" Oct 14 06:26:39 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:23885be9-d24a-4c03-8ad0-d89d769d683a, vol_name:cephfs) < "" Oct 14 06:26:39 localhost nova_compute[295778]: 2025-10-14 10:26:39.518 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:39 localhost nova_compute[295778]: 2025-10-14 10:26:39.518 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:26:39 localhost nova_compute[295778]: 2025-10-14 10:26:39.519 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:26:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 111 B/s rd, 19 KiB/s wr, 0 op/s Oct 14 06:26:39 localhost nova_compute[295778]: 2025-10-14 10:26:39.703 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:26:39 localhost nova_compute[295778]: 2025-10-14 10:26:39.703 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:26:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:26:39 localhost nova_compute[295778]: 2025-10-14 10:26:39.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:39 localhost nova_compute[295778]: 2025-10-14 10:26:39.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:26:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e268 do_prune osdmap full prune enabled Oct 14 06:26:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e269 e269: 6 total, 6 up, 6 in Oct 14 06:26:40 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e269: 6 total, 6 up, 6 in Oct 14 06:26:40 localhost nova_compute[295778]: 2025-10-14 10:26:40.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:40 localhost nova_compute[295778]: 2025-10-14 10:26:40.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:40 localhost nova_compute[295778]: 2025-10-14 10:26:40.904 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:26:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e269 do_prune osdmap full prune enabled Oct 14 06:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:26:41 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e270 e270: 6 total, 6 up, 6 in Oct 14 06:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:26:41 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e270: 6 total, 6 up, 6 in Oct 14 06:26:41 localhost podman[349055]: 2025-10-14 10:26:41.556194134 +0000 UTC m=+0.088633977 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:26:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 86 KiB/s wr, 5 op/s Oct 14 06:26:41 localhost podman[349054]: 2025-10-14 10:26:41.612684309 +0000 UTC m=+0.147210327 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, config_id=iscsid, tcib_managed=true, managed_by=edpm_ansible) Oct 14 06:26:41 localhost podman[349055]: 2025-10-14 10:26:41.623436284 +0000 UTC m=+0.155876107 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d) Oct 14 06:26:41 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:26:41 localhost podman[349054]: 2025-10-14 10:26:41.651203268 +0000 UTC m=+0.185729306 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, container_name=iscsid, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true) Oct 14 06:26:41 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. Oct 14 06:26:41 localhost nova_compute[295778]: 2025-10-14 10:26:41.900 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:42 localhost nova_compute[295778]: 2025-10-14 10:26:42.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:42 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "23885be9-d24a-4c03-8ad0-d89d769d683a", "format": "json"}]: dispatch Oct 14 06:26:42 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:23885be9-d24a-4c03-8ad0-d89d769d683a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:26:42 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:23885be9-d24a-4c03-8ad0-d89d769d683a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:26:42 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:42.571+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '23885be9-d24a-4c03-8ad0-d89d769d683a' of type subvolume Oct 14 06:26:42 localhost ceph-mgr[300442]: mgr.server reply reply (95) 
Operation not supported operation 'clone-status' is not allowed on subvolume '23885be9-d24a-4c03-8ad0-d89d769d683a' of type subvolume Oct 14 06:26:42 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "23885be9-d24a-4c03-8ad0-d89d769d683a", "force": true, "format": "json"}]: dispatch Oct 14 06:26:42 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:23885be9-d24a-4c03-8ad0-d89d769d683a, vol_name:cephfs) < "" Oct 14 06:26:42 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/23885be9-d24a-4c03-8ad0-d89d769d683a'' moved to trashcan Oct 14 06:26:42 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:26:42 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:23885be9-d24a-4c03-8ad0-d89d769d683a, vol_name:cephfs) < "" Oct 14 06:26:42 localhost nova_compute[295778]: 2025-10-14 10:26:42.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 332 B/s rd, 84 KiB/s wr, 5 op/s Oct 14 06:26:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": 
"8d71f2e5-3ef1-4eed-8f9b-8ebc337dedc1_e43cf1ce-1c76-4d7d-9a7a-d3a4f50ec517", "force": true, "format": "json"}]: dispatch Oct 14 06:26:43 localhost nova_compute[295778]: 2025-10-14 10:26:43.919 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8d71f2e5-3ef1-4eed-8f9b-8ebc337dedc1_e43cf1ce-1c76-4d7d-9a7a-d3a4f50ec517, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:26:43 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:26:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:26:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:26:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:26:43 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:26:43 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev 491cf263-23ba-4267-a33d-0d32e751b693 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:26:43 
localhost ceph-mgr[300442]: [progress INFO root] complete: finished ev 491cf263-23ba-4267-a33d-0d32e751b693 (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:26:43 localhost ceph-mgr[300442]: [progress INFO root] Completed event 491cf263-23ba-4267-a33d-0d32e751b693 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:26:43 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:26:43 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:26:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' Oct 14 06:26:43 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta' Oct 14 06:26:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8d71f2e5-3ef1-4eed-8f9b-8ebc337dedc1_e43cf1ce-1c76-4d7d-9a7a-d3a4f50ec517, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:43 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "snap_name": "8d71f2e5-3ef1-4eed-8f9b-8ebc337dedc1", "force": true, "format": "json"}]: dispatch Oct 14 06:26:43 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs 
subvolume snapshot rm, snap_name:8d71f2e5-3ef1-4eed-8f9b-8ebc337dedc1, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' Oct 14 06:26:44 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta.tmp' to config b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5/.meta' Oct 14 06:26:44 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8d71f2e5-3ef1-4eed-8f9b-8ebc337dedc1, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:44 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:26:44 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:26:44 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:26:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:26:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0. 
Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.610490) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76 Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437604610554, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2343, "num_deletes": 260, "total_data_size": 3094129, "memory_usage": 3139872, "flush_reason": "Manual Compaction"} Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437604629831, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 3023266, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39906, "largest_seqno": 42248, "table_properties": {"data_size": 3013296, "index_size": 6219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23139, "raw_average_key_size": 21, "raw_value_size": 2992579, "raw_average_value_size": 2836, "num_data_blocks": 264, "num_entries": 1055, "num_filter_entries": 1055, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760437455, "oldest_key_time": 1760437455, "file_creation_time": 1760437604, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}} Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 19385 microseconds, and 7180 cpu microseconds. Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.629879) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 3023266 bytes OK Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.629905) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.632167) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.632189) EVENT_LOG_v1 {"time_micros": 1760437604632182, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.632212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3084080, prev total WAL file 
size 3084080, number of live WAL files 2. Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.633103) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133353534' seq:72057594037927935, type:22 .. '7061786F73003133383036' seq:0, type:0; will stop at (end) Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2952KB)], [75(16MB)] Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437604633159, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 19915638, "oldest_snapshot_seqno": -1} Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 14808 keys, 18589934 bytes, temperature: kUnknown Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437604737425, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 18589934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18505255, "index_size": 46562, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 37061, "raw_key_size": 398050, "raw_average_key_size": 26, "raw_value_size": 
18253737, "raw_average_value_size": 1232, "num_data_blocks": 1720, "num_entries": 14808, "num_filter_entries": 14808, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436204, "oldest_key_time": 0, "file_creation_time": 1760437604, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2c79d823-d159-4ad5-90f1-f3d028b9aa80", "db_session_id": "J53B5YABCFHMI3BNHYZN", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}} Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.737715) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 18589934 bytes Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.744388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.8 rd, 178.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 16.1 +0.0 blob) out(17.7 +0.0 blob), read-write-amplify(12.7) write-amplify(6.1) OK, records in: 15350, records dropped: 542 output_compression: NoCompression Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.744418) EVENT_LOG_v1 {"time_micros": 1760437604744405, "job": 46, "event": "compaction_finished", "compaction_time_micros": 104357, "compaction_time_cpu_micros": 51845, "output_level": 6, "num_output_files": 1, "total_output_size": 18589934, "num_input_records": 15350, "num_output_records": 14808, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437604744930, "job": 46, "event": "table_file_deletion", "file_number": 77} Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005486731/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760437604747261, 
"job": 46, "event": "table_file_deletion", "file_number": 75} Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.633031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.747295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.747302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.747305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.747308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:26:44 localhost ceph-mon[307093]: rocksdb: (Original Log Time 2025/10/14-10:26:44.747311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 14 06:26:44 localhost nova_compute[295778]: 2025-10-14 10:26:44.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:44 localhost nova_compute[295778]: 2025-10-14 10:26:44.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:26:45 
localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 108 KiB/s wr, 6 op/s Oct 14 06:26:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:26:45 localhost nova_compute[295778]: 2025-10-14 10:26:45.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:47 localhost nova_compute[295778]: 2025-10-14 10:26:47.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "format": "json"}]: dispatch Oct 14 06:26:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:26:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:26:47 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:47.131+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6a767286-02d1-4784-b1f6-8dc63cfc6fe5' of type subvolume Oct 14 06:26:47 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6a767286-02d1-4784-b1f6-8dc63cfc6fe5' of type subvolume Oct 14 06:26:47 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : 
from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6a767286-02d1-4784-b1f6-8dc63cfc6fe5", "force": true, "format": "json"}]: dispatch Oct 14 06:26:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6a767286-02d1-4784-b1f6-8dc63cfc6fe5'' moved to trashcan Oct 14 06:26:47 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:26:47 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6a767286-02d1-4784-b1f6-8dc63cfc6fe5, vol_name:cephfs) < "" Oct 14 06:26:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 108 KiB/s wr, 6 op/s Oct 14 06:26:47 localhost nova_compute[295778]: 2025-10-14 10:26:47.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:47 localhost nova_compute[295778]: 2025-10-14 10:26:47.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 14 06:26:48 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"9eacb5eb-0b68-4b41-b544-5c22f17dfd26", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:26:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, vol_name:cephfs) < "" Oct 14 06:26:48 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9eacb5eb-0b68-4b41-b544-5c22f17dfd26/.meta.tmp' Oct 14 06:26:48 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9eacb5eb-0b68-4b41-b544-5c22f17dfd26/.meta.tmp' to config b'/volumes/_nogroup/9eacb5eb-0b68-4b41-b544-5c22f17dfd26/.meta' Oct 14 06:26:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, vol_name:cephfs) < "" Oct 14 06:26:48 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9eacb5eb-0b68-4b41-b544-5c22f17dfd26", "format": "json"}]: dispatch Oct 14 06:26:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, vol_name:cephfs) < "" Oct 14 06:26:48 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, vol_name:cephfs) < "" Oct 14 06:26:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 
06:26:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:26:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 14 06:26:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2434751429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 14 06:26:48 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 14 06:26:48 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2434751429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 14 06:26:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 222 B/s rd, 38 KiB/s wr, 2 op/s Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost 
ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.977 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.978 
12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.978 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.979 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:49 localhost ceilometer_agent_compute[243915]: 2025-10-14 10:26:49.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 14 06:26:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:26:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e270 do_prune osdmap full prune enabled Oct 14 06:26:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e271 e271: 6 total, 6 up, 6 in Oct 14 06:26:50 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e271: 6 total, 6 up, 6 in Oct 14 06:26:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:26:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:26:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. 
Oct 14 06:26:50 localhost podman[349176]: 2025-10-14 10:26:50.553280709 +0000 UTC m=+0.090276050 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=) Oct 14 06:26:50 localhost podman[349177]: 2025-10-14 10:26:50.603384155 +0000 UTC m=+0.140600602 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0) Oct 14 06:26:50 localhost podman[349176]: 2025-10-14 10:26:50.675542534 +0000 UTC m=+0.212537845 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.) 
Oct 14 06:26:50 localhost podman[349178]: 2025-10-14 10:26:50.695543234 +0000 UTC m=+0.230389898 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:26:50 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. 
Oct 14 06:26:50 localhost podman[349177]: 2025-10-14 10:26:50.735138571 +0000 UTC m=+0.272354978 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 14 06:26:50 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:26:50 localhost podman[349178]: 2025-10-14 10:26:50.785773561 +0000 UTC m=+0.320620215 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:26:50 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:26:50 localhost nova_compute[295778]: 2025-10-14 10:26:50.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e271 do_prune osdmap full prune enabled Oct 14 06:26:51 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e272 e272: 6 total, 6 up, 6 in Oct 14 06:26:51 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e272: 6 total, 6 up, 6 in Oct 14 06:26:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 92 KiB/s wr, 6 op/s Oct 14 06:26:52 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "9eacb5eb-0b68-4b41-b544-5c22f17dfd26", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch Oct 14 06:26:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, vol_name:cephfs) < "" Oct 14 06:26:52 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, vol_name:cephfs) < "" Oct 14 06:26:52 localhost nova_compute[295778]: 2025-10-14 10:26:52.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:53 localhost ovn_metadata_agent[161927]: 2025-10-14 10:26:53.102 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, 
old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:26:53 localhost ovn_metadata_agent[161927]: 2025-10-14 10:26:53.103 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:26:53 localhost nova_compute[295778]: 2025-10-14 10:26:53.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 48 KiB/s wr, 3 op/s Oct 14 06:26:55 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9eacb5eb-0b68-4b41-b544-5c22f17dfd26", "format": "json"}]: dispatch Oct 14 06:26:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:26:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:26:55 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:26:55.289+0000 7ff5d7f75640 -1 mgr.server reply 
reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9eacb5eb-0b68-4b41-b544-5c22f17dfd26' of type subvolume Oct 14 06:26:55 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9eacb5eb-0b68-4b41-b544-5c22f17dfd26' of type subvolume Oct 14 06:26:55 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9eacb5eb-0b68-4b41-b544-5c22f17dfd26", "force": true, "format": "json"}]: dispatch Oct 14 06:26:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, vol_name:cephfs) < "" Oct 14 06:26:55 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9eacb5eb-0b68-4b41-b544-5c22f17dfd26'' moved to trashcan Oct 14 06:26:55 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:26:55 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9eacb5eb-0b68-4b41-b544-5c22f17dfd26, vol_name:cephfs) < "" Oct 14 06:26:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:26:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 69 KiB/s wr, 4 op/s Oct 14 06:26:55 localhost nova_compute[295778]: 2025-10-14 10:26:55.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:57 localhost 
nova_compute[295778]: 2025-10-14 10:26:57.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:26:57 localhost nova_compute[295778]: 2025-10-14 10:26:57.270 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:26:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 69 KiB/s wr, 4 op/s Oct 14 06:26:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:26:57.652 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:26:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:26:57.652 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:26:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:26:57.653 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:26:58 localhost ovn_metadata_agent[161927]: 2025-10-14 10:26:58.106 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5830d1b9-dd16-4a23-879b-f28430ab4793, 
col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:26:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea775ff7-ddba-4326-ac8e-21825b1149a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 14 06:26:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea775ff7-ddba-4326-ac8e-21825b1149a9, vol_name:cephfs) < "" Oct 14 06:26:58 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea775ff7-ddba-4326-ac8e-21825b1149a9/.meta.tmp' Oct 14 06:26:58 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea775ff7-ddba-4326-ac8e-21825b1149a9/.meta.tmp' to config b'/volumes/_nogroup/ea775ff7-ddba-4326-ac8e-21825b1149a9/.meta' Oct 14 06:26:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea775ff7-ddba-4326-ac8e-21825b1149a9, vol_name:cephfs) < "" Oct 14 06:26:58 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea775ff7-ddba-4326-ac8e-21825b1149a9", "format": "json"}]: dispatch Oct 14 06:26:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea775ff7-ddba-4326-ac8e-21825b1149a9, vol_name:cephfs) < "" 
Oct 14 06:26:58 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea775ff7-ddba-4326-ac8e-21825b1149a9, vol_name:cephfs) < "" Oct 14 06:26:58 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 14 06:26:58 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.15852 172.18.0.34:0/4172303360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 14 06:26:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s Oct 14 06:27:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e272 do_prune osdmap full prune enabled Oct 14 06:27:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e273 e273: 6 total, 6 up, 6 in Oct 14 06:27:00 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e273: 6 total, 6 up, 6 in Oct 14 06:27:00 localhost podman[246584]: time="2025-10-14T10:27:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:27:00 localhost podman[246584]: @ - - [14/Oct/2025:10:27:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:27:00 localhost podman[246584]: @ - - [14/Oct/2025:10:27:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18917 "" "Go-http-client/1.1" Oct 14 06:27:00 localhost nova_compute[295778]: 2025-10-14 10:27:00.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 62 KiB/s wr, 3 op/s Oct 14 06:27:02 localhost nova_compute[295778]: 2025-10-14 10:27:02.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:03 localhost openstack_network_exporter[248748]: ERROR 10:27:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:27:03 localhost openstack_network_exporter[248748]: ERROR 10:27:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:27:03 localhost openstack_network_exporter[248748]: ERROR 10:27:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:27:03 localhost openstack_network_exporter[248748]: ERROR 10:27:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:27:03 localhost openstack_network_exporter[248748]: Oct 14 06:27:03 localhost openstack_network_exporter[248748]: ERROR 10:27:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:27:03 localhost openstack_network_exporter[248748]: Oct 14 06:27:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:27:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea775ff7-ddba-4326-ac8e-21825b1149a9", "format": "json"}]: dispatch Oct 14 06:27:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea775ff7-ddba-4326-ac8e-21825b1149a9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:27:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea775ff7-ddba-4326-ac8e-21825b1149a9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:27:03 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea775ff7-ddba-4326-ac8e-21825b1149a9' of type subvolume Oct 14 06:27:03 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:27:03.482+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea775ff7-ddba-4326-ac8e-21825b1149a9' of type subvolume Oct 14 06:27:03 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea775ff7-ddba-4326-ac8e-21825b1149a9", "force": true, "format": "json"}]: dispatch Oct 14 06:27:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea775ff7-ddba-4326-ac8e-21825b1149a9, vol_name:cephfs) < "" Oct 14 06:27:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea775ff7-ddba-4326-ac8e-21825b1149a9'' moved to trashcan Oct 14 06:27:03 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Oct 14 06:27:03 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea775ff7-ddba-4326-ac8e-21825b1149a9, vol_name:cephfs) < "" Oct 14 06:27:03 localhost podman[349241]: 2025-10-14 10:27:03.554148273 +0000 UTC m=+0.092873019 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, 
config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:27:03 localhost podman[349241]: 2025-10-14 10:27:03.565152644 +0000 UTC m=+0.103877370 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:27:03 localhost ceph-mgr[300442]: 
log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 62 KiB/s wr, 3 op/s Oct 14 06:27:03 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:27:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 76 KiB/s wr, 3 op/s Oct 14 06:27:05 localhost nova_compute[295778]: 2025-10-14 10:27:05.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:06 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5189cc9c-7439-49a5-8389-5033e305ed93", "format": "json"}]: dispatch Oct 14 06:27:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5189cc9c-7439-49a5-8389-5033e305ed93, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:27:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5189cc9c-7439-49a5-8389-5033e305ed93, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:27:06 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5189cc9c-7439-49a5-8389-5033e305ed93", "force": true, "format": "json"}]: dispatch Oct 14 06:27:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume 
rm, sub_name:5189cc9c-7439-49a5-8389-5033e305ed93, vol_name:cephfs) < "" Oct 14 06:27:06 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5189cc9c-7439-49a5-8389-5033e305ed93'' moved to trashcan Oct 14 06:27:06 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:27:06 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5189cc9c-7439-49a5-8389-5033e305ed93, vol_name:cephfs) < "" Oct 14 06:27:07 localhost nova_compute[295778]: 2025-10-14 10:27:07.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 76 KiB/s wr, 3 op/s Oct 14 06:27:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:27:09 Oct 14 06:27:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:27:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:27:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['manila_data', 'backups', 'volumes', 'images', '.mgr', 'vms', 'manila_metadata'] Oct 14 06:27:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:27:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:27:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:27:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:27:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. Oct 14 06:27:09 localhost podman[349261]: 2025-10-14 10:27:09.535753724 +0000 UTC m=+0.075886049 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:27:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 76 KiB/s wr, 3 op/s Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:27:09 localhost systemd[1]: tmp-crun.Z8toMe.mount: Deactivated successfully. 
Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:27:09 localhost podman[349262]: 2025-10-14 10:27:09.606813735 +0000 UTC m=+0.144695440 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 
using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32) Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:27:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0021745009771077612 of space, bias 4.0, pg target 1.730902777777778 quantized to 16 (current 16) Oct 14 06:27:09 localhost podman[349261]: 2025-10-14 10:27:09.616145502 +0000 UTC m=+0.156277847 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5) Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:27:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:27:09 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:27:09 localhost podman[349262]: 2025-10-14 10:27:09.644052341 +0000 UTC m=+0.181934026 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:27:09 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. Oct 14 06:27:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:27:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Oct 14 06:27:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 14 06:27:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7a9c5d3c-6540-4f73-b15c-cbb5368b3746", "snap_name": "8bdda4f6-7878-494d-ab7c-e9a419ae575a_56147d7f-c9e5-402a-90e5-6dcea107e6b5", "force": true, "format": "json"}]: dispatch Oct 14 06:27:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8bdda4f6-7878-494d-ab7c-e9a419ae575a_56147d7f-c9e5-402a-90e5-6dcea107e6b5, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:27:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' Oct 14 06:27:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta' Oct 14 06:27:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8bdda4f6-7878-494d-ab7c-e9a419ae575a_56147d7f-c9e5-402a-90e5-6dcea107e6b5, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:27:10 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7a9c5d3c-6540-4f73-b15c-cbb5368b3746", "snap_name": 
"8bdda4f6-7878-494d-ab7c-e9a419ae575a", "force": true, "format": "json"}]: dispatch Oct 14 06:27:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8bdda4f6-7878-494d-ab7c-e9a419ae575a, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:27:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' Oct 14 06:27:10 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta.tmp' to config b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746/.meta' Oct 14 06:27:10 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8bdda4f6-7878-494d-ab7c-e9a419ae575a, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:27:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:10 localhost ceph-mgr[300442]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 14 06:27:10 localhost nova_compute[295778]: 2025-10-14 10:27:10.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v763: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 366 B/s rd, 64 KiB/s wr, 3 op/s Oct 14 06:27:12 localhost nova_compute[295778]: 2025-10-14 10:27:12.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. Oct 14 06:27:12 localhost podman[349302]: 2025-10-14 10:27:12.542852347 +0000 UTC m=+0.082081103 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, 
tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid) Oct 14 06:27:12 localhost podman[349302]: 2025-10-14 10:27:12.557223147 +0000 UTC m=+0.096451893 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 14 06:27:12 localhost podman[349303]: 2025-10-14 10:27:12.59284646 +0000 UTC m=+0.128265915 container health_status 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 14 06:27:12 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:27:12 localhost podman[349303]: 2025-10-14 10:27:12.628866524 +0000 UTC m=+0.164286059 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd) Oct 14 06:27:12 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. 
Oct 14 06:27:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e273 do_prune osdmap full prune enabled Oct 14 06:27:12 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : mgrmap e48: np0005486731.swasqz(active, since 19m), standbys: np0005486732.pasqzz, np0005486733.primvu Oct 14 06:27:12 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e274 e274: 6 total, 6 up, 6 in Oct 14 06:27:12 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e274: 6 total, 6 up, 6 in Oct 14 06:27:13 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a9c5d3c-6540-4f73-b15c-cbb5368b3746", "format": "json"}]: dispatch Oct 14 06:27:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:27:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 14 06:27:13 localhost ceph-fcadf6e2-9176-5818-a8d0-37b19acf8eaf-mgr-np0005486731-swasqz[300438]: 2025-10-14T10:27:13.281+0000 7ff5d7f75640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a9c5d3c-6540-4f73-b15c-cbb5368b3746' of type subvolume Oct 14 06:27:13 localhost ceph-mgr[300442]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a9c5d3c-6540-4f73-b15c-cbb5368b3746' of type subvolume Oct 14 06:27:13 localhost ceph-mgr[300442]: log_channel(audit) log [DBG] : from='client.15852 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a9c5d3c-6540-4f73-b15c-cbb5368b3746", "force": true, "format": "json"}]: dispatch Oct 14 
06:27:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:27:13 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7a9c5d3c-6540-4f73-b15c-cbb5368b3746'' moved to trashcan Oct 14 06:27:13 localhost ceph-mgr[300442]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 14 06:27:13 localhost ceph-mgr[300442]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a9c5d3c-6540-4f73-b15c-cbb5368b3746, vol_name:cephfs) < "" Oct 14 06:27:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 72 KiB/s wr, 3 op/s Oct 14 06:27:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:15 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 72 KiB/s wr, 4 op/s Oct 14 06:27:15 localhost nova_compute[295778]: 2025-10-14 10:27:15.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:17 localhost nova_compute[295778]: 2025-10-14 10:27:17.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:17 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v767: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 72 KiB/s wr, 4 op/s Oct 14 06:27:19 localhost ceph-mgr[300442]: 
log_channel(cluster) log [DBG] : pgmap v768: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 72 KiB/s wr, 4 op/s Oct 14 06:27:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e274 do_prune osdmap full prune enabled Oct 14 06:27:20 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 e275: 6 total, 6 up, 6 in Oct 14 06:27:20 localhost ceph-mon[307093]: log_channel(cluster) log [DBG] : osdmap e275: 6 total, 6 up, 6 in Oct 14 06:27:20 localhost nova_compute[295778]: 2025-10-14 10:27:20.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:27:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:27:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:27:21 localhost podman[349339]: 2025-10-14 10:27:21.55987507 +0000 UTC m=+0.097105451 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
config_id=edpm) Oct 14 06:27:21 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 462 B/s rd, 50 KiB/s wr, 3 op/s Oct 14 06:27:21 localhost podman[349339]: 2025-10-14 10:27:21.601224654 +0000 UTC m=+0.138455025 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container) Oct 14 06:27:21 localhost systemd[1]: tmp-crun.pnJU0g.mount: Deactivated successfully. Oct 14 06:27:21 localhost podman[349341]: 2025-10-14 10:27:21.618946263 +0000 UTC m=+0.147567086 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 14 06:27:21 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:27:21 localhost podman[349340]: 2025-10-14 10:27:21.662485435 +0000 UTC m=+0.192423563 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller) Oct 14 06:27:21 localhost podman[349341]: 2025-10-14 10:27:21.686104091 +0000 UTC m=+0.214724904 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:27:21 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:27:21 localhost podman[349340]: 2025-10-14 10:27:21.717168703 +0000 UTC m=+0.247106831 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 14 06:27:21 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:27:22 localhost nova_compute[295778]: 2025-10-14 10:27:22.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:23 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v771: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 44 KiB/s wr, 3 op/s Oct 14 06:27:25 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:25 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 26 KiB/s wr, 1 op/s Oct 14 06:27:25 localhost nova_compute[295778]: 2025-10-14 10:27:25.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:27 localhost nova_compute[295778]: 2025-10-14 10:27:27.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:27 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v773: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 26 KiB/s wr, 1 op/s Oct 14 06:27:29 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v774: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 26 KiB/s wr, 1 op/s Oct 14 06:27:30 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:30 localhost podman[246584]: time="2025-10-14T10:27:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:27:30 localhost podman[246584]: @ - - 
[14/Oct/2025:10:27:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:27:30 localhost podman[246584]: @ - - [14/Oct/2025:10:27:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18918 "" "Go-http-client/1.1" Oct 14 06:27:30 localhost nova_compute[295778]: 2025-10-14 10:27:30.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:31 localhost ovn_metadata_agent[161927]: 2025-10-14 10:27:31.562 161932 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:6b:50', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '6a:59:81:01:bc:8b'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 14 06:27:31 localhost ovn_metadata_agent[161927]: 2025-10-14 10:27:31.563 161932 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 14 06:27:31 localhost nova_compute[295778]: 2025-10-14 10:27:31.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:31 localhost ovn_metadata_agent[161927]: 2025-10-14 10:27:31.565 161932 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, 
record=5830d1b9-dd16-4a23-879b-f28430ab4793, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 14 06:27:31 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 12 KiB/s wr, 0 op/s Oct 14 06:27:32 localhost nova_compute[295778]: 2025-10-14 10:27:32.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:32 localhost nova_compute[295778]: 2025-10-14 10:27:32.907 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:32 localhost nova_compute[295778]: 2025-10-14 10:27:32.926 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:27:32 localhost nova_compute[295778]: 2025-10-14 10:27:32.927 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:27:32 localhost nova_compute[295778]: 2025-10-14 10:27:32.927 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:27:32 localhost nova_compute[295778]: 2025-10-14 10:27:32.928 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Auditing locally available compute resources for np0005486731.localdomain (node: np0005486731.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 14 06:27:32 localhost nova_compute[295778]: 2025-10-14 10:27:32.928 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:27:33 localhost openstack_network_exporter[248748]: ERROR 10:27:33 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:27:33 localhost openstack_network_exporter[248748]: ERROR 10:27:33 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:27:33 localhost openstack_network_exporter[248748]: Oct 14 06:27:33 localhost openstack_network_exporter[248748]: ERROR 10:27:33 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:27:33 localhost openstack_network_exporter[248748]: Oct 14 06:27:33 localhost openstack_network_exporter[248748]: ERROR 10:27:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:27:33 localhost openstack_network_exporter[248748]: ERROR 10:27:33 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:27:33 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:27:33 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : 
from='client.? 172.18.0.106:0/3826270580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:27:33 localhost nova_compute[295778]: 2025-10-14 10:27:33.388 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:27:33 localhost nova_compute[295778]: 2025-10-14 10:27:33.579 2 WARNING nova.virt.libvirt.driver [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 14 06:27:33 localhost nova_compute[295778]: 2025-10-14 10:27:33.580 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Hypervisor/Node resource view: name=np0005486731.localdomain free_ram=11297MB free_disk=41.83695602416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": 
"7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 14 06:27:33 localhost nova_compute[295778]: 2025-10-14 10:27:33.581 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:27:33 localhost nova_compute[295778]: 2025-10-14 10:27:33.581 2 DEBUG oslo_concurrency.lockutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:27:33 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v776: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s wr, 0 
op/s Oct 14 06:27:33 localhost nova_compute[295778]: 2025-10-14 10:27:33.642 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 14 06:27:33 localhost nova_compute[295778]: 2025-10-14 10:27:33.643 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Final resource view: name=np0005486731.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 14 06:27:33 localhost nova_compute[295778]: 2025-10-14 10:27:33.962 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 14 06:27:34 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 14 06:27:34 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1947865076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 14 06:27:34 localhost nova_compute[295778]: 2025-10-14 10:27:34.417 2 DEBUG oslo_concurrency.processutils [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 14 06:27:34 localhost nova_compute[295778]: 2025-10-14 10:27:34.424 2 DEBUG nova.compute.provider_tree [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed in ProviderTree for provider: ebb6de71-88e5-4477-92fc-f2b9532f7fcd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 14 06:27:34 localhost nova_compute[295778]: 2025-10-14 10:27:34.442 2 DEBUG nova.scheduler.client.report [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Inventory has not changed for provider ebb6de71-88e5-4477-92fc-f2b9532f7fcd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 14 06:27:34 localhost nova_compute[295778]: 2025-10-14 10:27:34.445 2 DEBUG nova.compute.resource_tracker [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Compute_service record updated for np0005486731.localdomain:np0005486731.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 14 06:27:34 localhost nova_compute[295778]: 2025-10-14 10:27:34.445 2 DEBUG oslo_concurrency.lockutils [None 
req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:27:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. Oct 14 06:27:34 localhost podman[349449]: 2025-10-14 10:27:34.539268626 +0000 UTC m=+0.080690636 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3) Oct 14 06:27:34 localhost podman[349449]: 2025-10-14 10:27:34.578143815 +0000 UTC m=+0.119565765 container exec_died 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 14 06:27:34 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. Oct 14 06:27:35 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:35 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v777: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s wr, 0 op/s Oct 14 06:27:35 localhost nova_compute[295778]: 2025-10-14 10:27:35.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:37 localhost nova_compute[295778]: 2025-10-14 10:27:37.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:37 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:38 localhost nova_compute[295778]: 2025-10-14 10:27:38.443 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:38 localhost nova_compute[295778]: 2025-10-14 10:27:38.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:38 localhost nova_compute[295778]: 2025-10-14 10:27:38.904 2 DEBUG 
nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 14 06:27:38 localhost nova_compute[295778]: 2025-10-14 10:27:38.905 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 14 06:27:38 localhost nova_compute[295778]: 2025-10-14 10:27:38.924 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 14 06:27:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:27:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:27:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:27:39 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:27:39 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:39 localhost nova_compute[295778]: 2025-10-14 10:27:39.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:40 localhost nova_compute[295778]: 2025-10-14 10:27:40.192 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:40 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242. Oct 14 06:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc. 
Oct 14 06:27:40 localhost podman[349467]: 2025-10-14 10:27:40.539756958 +0000 UTC m=+0.083543912 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 14 06:27:40 localhost podman[349468]: 2025-10-14 10:27:40.572629288 +0000 UTC 
m=+0.114604784 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 14 06:27:40 localhost podman[349467]: 2025-10-14 10:27:40.577225979 +0000 UTC m=+0.121012903 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 14 06:27:40 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:27:40 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:27:40 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully. 
Oct 14 06:27:40 localhost podman[349468]: 2025-10-14 10:27:40.634411183 +0000 UTC m=+0.176386689 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 14 06:27:40 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully. 
Oct 14 06:27:40 localhost nova_compute[295778]: 2025-10-14 10:27:40.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:41 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v780: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:41 localhost nova_compute[295778]: 2025-10-14 10:27:41.902 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:41 localhost nova_compute[295778]: 2025-10-14 10:27:41.903 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:41 localhost nova_compute[295778]: 2025-10-14 10:27:41.903 2 DEBUG nova.compute.manager [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 14 06:27:42 localhost nova_compute[295778]: 2025-10-14 10:27:42.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be. Oct 14 06:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74. 
Oct 14 06:27:43 localhost podman[349509]: 2025-10-14 10:27:43.544566428 +0000 UTC m=+0.083739416 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 14 06:27:43 localhost podman[349509]: 2025-10-14 10:27:43.579590186 +0000 UTC m=+0.118763164 container exec_died 
6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0) Oct 14 06:27:43 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:43 localhost systemd[1]: tmp-crun.HQ1I99.mount: Deactivated successfully. 
Oct 14 06:27:43 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully. Oct 14 06:27:43 localhost podman[349508]: 2025-10-14 10:27:43.606912009 +0000 UTC m=+0.149966550 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image) Oct 14 06:27:43 localhost podman[349508]: 2025-10-14 10:27:43.615314441 +0000 UTC m=+0.158368982 container 
exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2) Oct 14 06:27:43 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully. 
Oct 14 06:27:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain.devices.0}] v 0) Oct 14 06:27:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain.devices.0}] v 0) Oct 14 06:27:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486731.localdomain}] v 0) Oct 14 06:27:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486733.localdomain}] v 0) Oct 14 06:27:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:44 localhost nova_compute[295778]: 2025-10-14 10:27:44.905 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain.devices.0}] v 0) Oct 14 06:27:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' 
entity='mgr.np0005486731.swasqz' Oct 14 06:27:44 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005486732.localdomain}] v 0) Oct 14 06:27:44 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:45 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 14 06:27:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 14 06:27:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 14 06:27:45 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:27:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 14 06:27:45 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:45 localhost ceph-mgr[300442]: [progress INFO root] update: starting ev e882b2e8-1514-4c99-ad56-c077f4e0087f (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:27:45 localhost ceph-mgr[300442]: [progress INFO root] 
complete: finished ev e882b2e8-1514-4c99-ad56-c077f4e0087f (Updating node-proxy deployment (+3 -> 3)) Oct 14 06:27:45 localhost ceph-mgr[300442]: [progress INFO root] Completed event e882b2e8-1514-4c99-ad56-c077f4e0087f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 14 06:27:45 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 14 06:27:45 localhost ceph-mon[307093]: log_channel(audit) log [DBG] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 14 06:27:45 localhost nova_compute[295778]: 2025-10-14 10:27:45.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 14 06:27:45 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:45 localhost nova_compute[295778]: 2025-10-14 10:27:45.903 2 DEBUG 
oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:45 localhost nova_compute[295778]: 2025-10-14 10:27:45.904 2 DEBUG oslo_service.periodic_task [None req-845103e5-b710-4226-9615-7a136100dc05 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 14 06:27:47 localhost nova_compute[295778]: 2025-10-14 10:27:47.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:47 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:49 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:49 localhost ceph-mgr[300442]: [progress INFO root] Writing back 50 completed events Oct 14 06:27:49 localhost ceph-mon[307093]: mon.np0005486731@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 14 06:27:49 localhost ceph-mon[307093]: log_channel(audit) log [INF] : from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:49 localhost ceph-mon[307093]: from='mgr.44286 172.18.0.106:0/3162921916' entity='mgr.np0005486731.swasqz' Oct 14 06:27:50 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:50 localhost nova_compute[295778]: 2025-10-14 10:27:50.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:51 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v785: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:52 localhost nova_compute[295778]: 2025-10-14 10:27:52.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749. Oct 14 06:27:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec. Oct 14 06:27:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3. Oct 14 06:27:52 localhost systemd[1]: tmp-crun.MEyoYr.mount: Deactivated successfully. Oct 14 06:27:52 localhost podman[349688]: 2025-10-14 10:27:52.566012069 +0000 UTC m=+0.102352380 container health_status 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3) Oct 14 06:27:52 localhost systemd[1]: tmp-crun.SUEwEM.mount: Deactivated successfully. Oct 14 06:27:52 localhost podman[349687]: 2025-10-14 10:27:52.614153383 +0000 UTC m=+0.149900929 container health_status 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41) Oct 14 06:27:52 localhost podman[349687]: 2025-10-14 10:27:52.631183494 +0000 UTC m=+0.166931000 container exec_died 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, name=ubi9-minimal, distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 14 06:27:52 localhost podman[349688]: 2025-10-14 10:27:52.642499903 +0000 UTC m=+0.178840164 container exec_died 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 14 06:27:52 localhost systemd[1]: 306ac78d0c3e3482e95310c418e7c05db6e46e13f83925cef6b2094f39dea749.service: Deactivated successfully. Oct 14 06:27:52 localhost systemd[1]: 328895f20055484ac59d6713a4dae49fd1208587c755852ca069c7164515eeec.service: Deactivated successfully. 
Oct 14 06:27:52 localhost podman[349689]: 2025-10-14 10:27:52.700377185 +0000 UTC m=+0.234005604 container health_status c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 14 06:27:52 localhost podman[349689]: 2025-10-14 10:27:52.743385303 +0000 UTC m=+0.277013662 container exec_died c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 14 06:27:52 localhost systemd[1]: c37891ecbe8e01d1fb196442236346fa0dda9990d586d0941cb73561d4df1ef3.service: Deactivated successfully. 
Oct 14 06:27:53 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:55 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:27:55 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:55 localhost nova_compute[295778]: 2025-10-14 10:27:55.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:56 localhost sshd[349756]: main: sshd: ssh-rsa algorithm is disabled Oct 14 06:27:56 localhost systemd-logind[760]: New session 74 of user zuul. Oct 14 06:27:56 localhost systemd[1]: Started Session 74 of User zuul. Oct 14 06:27:57 localhost python3[349778]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-2d7c-d7b0-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 14 06:27:57 localhost nova_compute[295778]: 2025-10-14 10:27:57.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:27:57 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:27:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:27:57.653 161932 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 14 06:27:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:27:57.654 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 14 06:27:57 localhost ovn_metadata_agent[161927]: 2025-10-14 10:27:57.654 161932 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 14 06:27:59 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v789: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:28:00 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:28:00 localhost podman[246584]: time="2025-10-14T10:28:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 14 06:28:00 localhost podman[246584]: @ - - [14/Oct/2025:10:28:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144488 "" "Go-http-client/1.1" Oct 14 06:28:00 localhost podman[246584]: @ - - [14/Oct/2025:10:28:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18909 "" "Go-http-client/1.1" Oct 14 06:28:00 localhost nova_compute[295778]: 2025-10-14 10:28:00.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:28:01 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v790: 177 pgs: 177 active+clean; 
224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:28:02 localhost nova_compute[295778]: 2025-10-14 10:28:02.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:28:02 localhost systemd[1]: session-74.scope: Deactivated successfully. Oct 14 06:28:02 localhost systemd-logind[760]: Session 74 logged out. Waiting for processes to exit. Oct 14 06:28:02 localhost systemd-logind[760]: Removed session 74. Oct 14 06:28:03 localhost openstack_network_exporter[248748]: ERROR 10:28:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:28:03 localhost openstack_network_exporter[248748]: ERROR 10:28:03 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 14 06:28:03 localhost openstack_network_exporter[248748]: ERROR 10:28:03 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 14 06:28:03 localhost openstack_network_exporter[248748]: ERROR 10:28:03 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 14 06:28:03 localhost openstack_network_exporter[248748]: Oct 14 06:28:03 localhost openstack_network_exporter[248748]: ERROR 10:28:03 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 14 06:28:03 localhost openstack_network_exporter[248748]: Oct 14 06:28:03 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:28:05 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 14 06:28:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7. 
Oct 14 06:28:05 localhost podman[349782]: 2025-10-14 10:28:05.545154998 +0000 UTC m=+0.081073856 container health_status 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute) Oct 14 06:28:05 localhost podman[349782]: 2025-10-14 10:28:05.558097771 +0000 UTC m=+0.094016619 container exec_died 
59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 14 06:28:05 localhost systemd[1]: 59108b9e9d08bc7c4fd4f99c3f1d6f36f8af4957b21bfab91f9328e135f068f7.service: Deactivated successfully. 
Oct 14 06:28:05 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:28:05 localhost nova_compute[295778]: 2025-10-14 10:28:05.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:28:07 localhost nova_compute[295778]: 2025-10-14 10:28:07.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 14 06:28:07 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v793: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:28:09 localhost ceph-mgr[300442]: [balancer INFO root] Optimize plan auto_2025-10-14_10:28:09 Oct 14 06:28:09 localhost ceph-mgr[300442]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 14 06:28:09 localhost ceph-mgr[300442]: [balancer INFO root] do_upmap Oct 14 06:28:09 localhost ceph-mgr[300442]: [balancer INFO root] pools ['.mgr', 'volumes', 'manila_data', 'images', 'vms', 'manila_metadata', 'backups'] Oct 14 06:28:09 localhost ceph-mgr[300442]: [balancer INFO root] prepared 0/10 changes Oct 14 06:28:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. Oct 14 06:28:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:28:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 14 06:28:09 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: [] Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after= Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 14 06:28:09 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] _maybe_adjust Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325819636376326 of space, bias 1.0, pg target 0.6651639272752652 quantized to 32 (current 32) Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO 
root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 14 06:28:09 localhost ceph-mgr[300442]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0022396596698771635 of space, bias 4.0, pg target 1.782769097222222 quantized to 16 (current 16)
Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:28:09 localhost ceph-mgr[300442]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:28:10 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:28:10 localhost ceph-mgr[300442]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:28:10 localhost ceph-mgr[300442]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:28:10 localhost nova_compute[295778]: 2025-10-14 10:28:10.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:28:11 localhost ceph-mgr[300442]: [devicehealth INFO root] Check health
Oct 14 06:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.
Oct 14 06:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.
Oct 14 06:28:11 localhost podman[349801]: 2025-10-14 10:28:11.52827897 +0000 UTC m=+0.071166934 container health_status 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:28:11 localhost podman[349801]: 2025-10-14 10:28:11.537121614 +0000 UTC m=+0.080009628 container exec_died 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1e4eeec18f8da2b364b39b7a7358aef5, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 06:28:11 localhost systemd[1]: 6d16434798e381a10f8c7b375f27b9598c1cbf9659693abeb156fb81b7795242.service: Deactivated successfully.
Oct 14 06:28:11 localhost podman[349802]: 2025-10-14 10:28:11.596626329 +0000 UTC m=+0.132638991 container health_status fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 06:28:11 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v795: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Oct 14 06:28:11 localhost podman[349802]: 2025-10-14 10:28:11.609961662 +0000 UTC m=+0.145974294 container exec_died fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 14 06:28:11 localhost systemd[1]: fcf956b7e943122d3c9d31c1efe8f2402ca5f764dde8ca7d77d31a77a5cfeefc.service: Deactivated successfully.
Oct 14 06:28:12 localhost nova_compute[295778]: 2025-10-14 10:28:12.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 33 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:28:13 localhost ceph-mgr[300442]: log_channel(cluster) log [DBG] : pgmap v796: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Oct 14 06:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.
Oct 14 06:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.
Oct 14 06:28:14 localhost systemd[1]: tmp-crun.DF35nc.mount: Deactivated successfully.
Oct 14 06:28:14 localhost podman[349840]: 2025-10-14 10:28:14.552501675 +0000 UTC m=+0.093789603 container health_status 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 06:28:14 localhost podman[349840]: 2025-10-14 10:28:14.561810782 +0000 UTC m=+0.103098730 container exec_died 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 06:28:14 localhost systemd[1]: 46ab027993ee8f264b62b3639476b44731cb7eec04dbe3439282c9d213d170be.service: Deactivated successfully.
Oct 14 06:28:14 localhost podman[349841]: 2025-10-14 10:28:14.613831048 +0000 UTC m=+0.152115686 container health_status 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct 14 06:28:14 localhost podman[349841]: 2025-10-14 10:28:14.651877146 +0000 UTC m=+0.190161814 container exec_died 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.build-date=20251009, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=0468cb21803d466b2abfe00835cf1d2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 14 06:28:14 localhost systemd[1]: 6475c681cee1168ed7641152be8f3f58d647458b2b49e5d76b1efd46d5f7ec74.service: Deactivated successfully.
Oct 14 06:28:15 localhost sshd[349880]: main: sshd: ssh-rsa algorithm is disabled
Oct 14 06:28:15 localhost systemd-logind[760]: New session 75 of user zuul.
Oct 14 06:28:15 localhost ceph-mon[307093]: mon.np0005486731@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:28:15 localhost systemd[1]: Started Session 75 of User zuul.